The topic of this thesis is the theoretical and numerical analysis of optimal control problems, whose differential constraints are given by Fokker-Planck models related to jump-diffusion processes. We tackle the issue of controlling a stochastic process by formulating a deterministic optimization problem. The
key idea of our approach is to focus on the probability density function of the process,
whose time evolution is modeled by the Fokker-Planck equation. Our control framework is advantageous since it allows us to model the action of the control over the entire range of the process, whose statistics are characterized by the shape of its probability density function.
We first investigate jump-diffusion processes, illustrating their main properties. We define stochastic initial-value problems and present results on the existence and uniqueness of their solutions. We then discuss how numerical solutions of stochastic problems are computed, focusing on the Euler-Maruyama method.
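The Euler-Maruyama step for a jump-diffusion can be sketched as follows; this is a minimal illustration, not the thesis's implementation, and the drift, diffusion, jump intensity, and jump-size distribution below are assumed example choices.

```python
import numpy as np

def euler_maruyama_jump(mu, sigma, jump_rate, jump_sampler, x0, T, n_steps, rng):
    """Euler-Maruyama scheme for a jump-diffusion
        dX_t = mu(X_t, t) dt + sigma(X_t, t) dW_t + dJ_t,
    where J is a compound Poisson process with intensity jump_rate
    and jump sizes drawn by jump_sampler(rng)."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    t = 0.0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))        # Brownian increment
        n_jumps = rng.poisson(jump_rate * dt)    # jumps in (t, t + dt]
        dj = sum(jump_sampler(rng) for _ in range(n_jumps))
        x[k + 1] = x[k] + mu(x[k], t) * dt + sigma(x[k], t) * dw + dj
        t += dt
    return x

rng = np.random.default_rng(0)
path = euler_maruyama_jump(
    mu=lambda x, t: -x,                      # illustrative mean-reverting drift
    sigma=lambda x, t: 0.2,                  # illustrative constant diffusion
    jump_rate=1.0,
    jump_sampler=lambda rng: rng.normal(0.0, 0.5),
    x0=1.0, T=1.0, n_steps=1000, rng=rng)
print(path.shape)  # one sample path with n_steps + 1 points
```

Each call returns one sample path; an empirical density estimate over many such paths approximates the probability density function governed by the Fokker-Planck equation.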
We turn our attention to jump-diffusion models with time- and space-dependent coefficients and jumps given by a compound Poisson process. We derive the related Fokker-Planck equations, which take the form of partial integro-differential equations. Their differential term is governed by a parabolic operator, while the nonlocal integral operator is due to the presence of the jumps. The derivation is carried out in two cases. On the one hand, we consider a process with unbounded range. On the other hand, we confine the dynamics of the sample paths to a bounded domain, and thus the behavior of the process near the boundaries has to be specified. Throughout this thesis, we set the barriers of the domain to be reflecting.
The Fokker-Planck equation, endowed with initial and boundary conditions, gives rise to Fokker-Planck problems. Their solvability is discussed in suitable functional spaces. The properties of their solutions are examined, namely their regularity, positivity and probability mass conservation. Since closed-form solutions to Fokker-Planck problems are usually not available, one has to resort to numerical methods.
The first main achievement of this thesis is the definition and analysis of conservative and positivity-preserving numerical methods for Fokker-Planck problems. Our SIMEX1 and SIMEX2 (Splitting-Implicit-Explicit) schemes are defined within the framework given by the method of lines. The differential operator is discretized by a finite volume scheme given by the Chang-Cooper method, while the integral operator is approximated by a mid-point rule. This leads to a large system of ordinary differential equations, which we approximate with the Strang-Marchuk splitting method. This technique decomposes the original problem into a sequence of subproblems with simpler structure, which are solved separately and linked to each other through initial conditions and final solutions. After performing the splitting step, we carry out the time integration with first- and second-order time-differencing methods. These steps give rise to the SIMEX1 and SIMEX2 methods, respectively.
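The Strang-Marchuk splitting pattern can be illustrated on a toy ODE whose right-hand side splits into two parts; the SIMEX schemes apply the same half-step/full-step/half-step pattern to the far larger semi-discretized Fokker-Planck system. The two exponential subflows below are assumed example subproblems.

```python
import numpy as np

def strang_step(u, dt, solve_A, solve_B):
    """One Strang-Marchuk splitting step for du/dt = A(u) + B(u):
    a half step with A, a full step with B, another half step with A.
    solve_A(u, dt) and solve_B(u, dt) advance each subproblem."""
    u = solve_A(u, 0.5 * dt)
    u = solve_B(u, dt)
    u = solve_A(u, 0.5 * dt)
    return u

# toy example: du/dt = -a*u - b*u, split into the two linear parts,
# each of which is solved exactly by its own exponential flow
a, b = 1.0, 2.0
solve_A = lambda u, dt: u * np.exp(-a * dt)
solve_B = lambda u, dt: u * np.exp(-b * dt)

u, dt = 1.0, 0.01
for _ in range(100):
    u = strang_step(u, dt, solve_A, solve_B)
print(u)  # close to exp(-(a + b) * 1.0), since the subflows commute here
```

In general the splitting error is second order in dt; in this commuting toy case it vanishes entirely.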
A full convergence and stability analysis of our schemes is included. Moreover, we are able to prove that the positivity and the mass conservation of the solution to Fokker-Planck problems are satisfied at the discrete level by the numerical solutions computed with the SIMEX schemes.
The second main achievement of this thesis is the theoretical analysis and the numerical solution of optimal control problems governed by Fokker-Planck models. The field of optimal control deals with finding control functions in such a way that given cost functionals are minimized. Our framework aims at the minimization of the difference between a known sequence of values and the first moment of a jump-diffusion process; therefore, this formulation can also be considered as a parameter estimation problem for stochastic processes. Two cases are discussed, in which the form of the cost functional is continuous-in-time and discrete-in-time, respectively.
The control variable enters the state equation as a coefficient of the Fokker-Planck partial integro-differential operator. We also include in the cost functional an $L^1$-penalization term, which enhances the sparsity of the solution. Therefore, the resulting optimization problem is nonconvex and nonsmooth. We derive the first-order optimality systems satisfied by the optimal solution. The computation of the optimal solution is carried out by means of proximal iterative schemes in an infinite-dimensional framework.
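A basic proximal gradient (forward-backward) iteration with the $L^1$ proximal map illustrates the type of scheme involved; this is a finite-dimensional toy sketch, not the infinite-dimensional method of the thesis, and the quadratic smooth part below is an assumed example.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(grad_f, x0, step, l1_weight, n_iter):
    """Proximal gradient iteration for min_x f(x) + l1_weight * ||x||_1:
    a gradient step on the smooth part f, followed by the L1 proximal
    map, which drives small components exactly to zero (sparsity)."""
    x = x0
    for _ in range(n_iter):
        x = soft_threshold(x - step * grad_f(x), step * l1_weight)
    return x

# toy smooth part: f(x) = 0.5 * ||A x - b||^2 with a diagonal A
A = np.array([[3.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, 0.1])
grad_f = lambda x: A.T @ (A @ x - b)
x = proximal_gradient(grad_f, np.zeros(2), step=0.1, l1_weight=0.5, n_iter=500)
print(x)  # the second component is driven to exactly zero by the penalty
```

The step size must stay below 2 divided by the Lipschitz constant of grad_f (here 9) for convergence.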
This thesis deals with value sets, i.e. the question of what the set of values that a set of functions can take at a prescribed point looks like.
Interest in such problems has been around for a long time; a first answer was given by the Schwarz lemma in the 19th century, and soon various refinements were proven.
Since the 1930s, a powerful method for solving such problems has been developed, namely Loewner theory. We make extensive use of this tool, as well as of variational methods going back to Schiffer, to examine the following questions:
We describe the set of values a schlicht normalised function on the unit disc with prescribed derivative at the origin can take by applying Pontryagin's maximum principle to the radial Loewner equation.
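For orientation, the radial Loewner equation referred to here can be written, in one common normalization, as a Loewner-Kufarev PDE driven by a unimodular function $\kappa(t)$ (the thesis may use a different convention):

```latex
\[
  \frac{\partial f}{\partial t}(z,t)
  \;=\; z\,\frac{\partial f}{\partial z}(z,t)\,
        \frac{\kappa(t)+z}{\kappa(t)-z},
  \qquad z \in \mathbb{D},\quad |\kappa(t)| = 1 .
\]
```

The driving function $\kappa$ plays the role of the control when Pontryagin's maximum principle is applied.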
We then determine the value ranges for the set of holomorphic, normalised, and bounded functions that have only real coefficients in their power series expansion around 0, and for the smaller set of functions which are additionally typically real.
Furthermore, we describe the values a univalent self-mapping of the upper half-plane with hydrodynamical normalization which is symmetric with respect to the imaginary axis can take.
Lastly, using variational methods, we give a necessary condition for a schlicht bounded function f on the unit disc to have extremal derivative at a point z where its value f(z) is fixed.
The thesis focuses on the valuation of firms in a system context where cross-holdings of the firms in liabilities and equities are allowed and, therefore, systemic risk can be modeled on a structural level. A main property of such models is that a pricing equilibrium has to be found in order to determine the firm values. While there exists a small but growing amount of research on the existence and the uniqueness of such price equilibria, the literature is still somewhat inconsistent. One example of this is that different authors define the underlying financial system in differing ways. Moreover, only few articles pay close attention to procedures for finding the pricing equilibria. In the existing publications, the provided algorithms mainly reflect the individual authors' particular approach to the problem. Additionally, all existing methods have the drawback of potentially infinite runtime.
For these reasons, the objectives of this thesis are as follows. First, a definition of a financial system is introduced in its most general form in Chapter 2. It is shown that under a fairly mild regularity condition the financial system has a unique payment equilibrium. In Chapter 3, some extensions and differing definitions of financial systems that exist in the literature are presented, and it is shown how these models can be embedded into the general model from the preceding chapter. Second, an overview of existing valuation algorithms for finding the equilibrium is given in Chapter 4, where the existing methods are generalized and their corresponding mathematical properties are highlighted. Third, a completely new class of valuation algorithms is developed in Chapter 4 that includes the additional information whether a firm is in default or solvent under a current payment vector. This results in procedures that are able to find the solution of the system in a finite number of iteration steps. In Chapter 5, the concepts developed in Chapter 4 are applied to more general financial systems where more than one seniority level of debt is present. Chapter 6 develops optimal starting vectors for non-finite algorithms, and Chapter 7 compares the existing and the newly developed algorithms concerning their efficiency in an extensive simulation study covering a wide range of possible settings for financial systems.
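As a stripped-down illustration of an equilibrium-finding procedure of this kind, the classical Eisenberg-Noe clearing model (a single seniority level, no equity cross-holdings) admits a simple Picard iteration; the function below and the two-firm figures are our own assumed example, not the general model or the finite algorithms of the thesis.

```python
import numpy as np

def clearing_payments(liabilities, external_assets, tol=1e-12, max_iter=1000):
    """Picard iteration for an Eisenberg-Noe-style payment equilibrium:
    each firm pays min(total nominal obligations, external assets plus
    payments received), iterated from full payment to a fixed point.
    liabilities[i, j] is the nominal amount firm i owes firm j."""
    p_bar = liabilities.sum(axis=1)                  # total obligations per firm
    with np.errstate(invalid="ignore", divide="ignore"):
        pi = np.where(p_bar[:, None] > 0, liabilities / p_bar[:, None], 0.0)
    p = p_bar.copy()                                 # start from full payment
    for _ in range(max_iter):
        p_new = np.minimum(p_bar, external_assets + pi.T @ p)
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# two firms: firm 0 owes 1.0 to firm 1, firm 1 owes 0.5 to firm 0
L = np.array([[0.0, 1.0], [0.5, 0.0]])
e = np.array([0.3, 0.8])
p = clearing_payments(L, e)
print(p)  # firm 0 defaults partially, firm 1 pays in full
```

Monotonicity of the update guarantees convergence, but in general only in the limit, which is exactly the potentially infinite runtime criticized above.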
Extreme value theory aims at modeling extreme but rare events from a probabilistic point of view. It is well-known that so-called generalized Pareto distributions, which are briefly reviewed in Chapter 1, are the only reasonable probability distributions suited for modeling observations above a high threshold, such as waves exceeding the height of a certain dike, earthquakes having at least a certain intensity, and, after applying a simple transformation, share prices falling below some low threshold. However, there are cases for which a generalized Pareto model might fail. Therefore, Chapter 2 derives certain neighborhoods of a generalized Pareto distribution and provides several statistical tests for these neighborhoods, where both the case of observing finite-dimensional data and the case of observing continuous functions on [0,1] are considered. By using a notation based on so-called D-norms, it is shown that these tests consistently link both frameworks, the finite-dimensional and the functional one. Since the derivation of the asymptotic distributions of the test statistics requires certain technical restrictions, Chapter 3 analyzes these assumptions in more detail. In particular, it provides some examples of distributions that satisfy the null hypothesis and of those that do not. Since continuous copula processes are crucial tools for the functional versions of the proposed tests, it is also discussed whether those copula processes actually exist for a given set of data. Moreover, some practical advice is given on how to choose the free parameters incorporated in the test statistics. Finally, a simulation study in Chapter 4 compares the three proposed test statistics with another test found in the literature that has a similar null hypothesis. This thesis ends with a short summary of the results and an outlook on further open questions.
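The peaks-over-threshold idea behind the generalized Pareto model can be sketched in a few lines; the data, threshold choice, and seed below are illustrative assumptions, not material from the thesis.

```python
import numpy as np
from scipy.stats import genpareto

# Peaks-over-threshold sketch: model the excesses over a high threshold
# with a generalized Pareto distribution (GPD).
rng = np.random.default_rng(42)
data = rng.standard_exponential(10_000)   # sample whose excesses are Exp(1)
u = np.quantile(data, 0.95)               # high threshold (95% quantile)
excesses = data[data > u] - u             # exceedances above u

# fit a GPD to the excesses; loc is fixed at 0 for threshold excesses
shape, loc, scale = genpareto.fit(excesses, floc=0.0)
print(shape, scale)  # for exponential data the true shape is 0, scale 1
```

The fitted shape parameter governs the tail heaviness: positive values indicate Pareto-type heavy tails, zero an exponential tail, and negative values a finite right endpoint.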
The present thesis considers the development and analysis of arbitrary Lagrangian-Eulerian
discontinuous Galerkin (ALE-DG) methods with time-dependent approximation spaces for
conservation laws and the Hamilton-Jacobi equations.
Fundamentals about conservation laws, Hamilton-Jacobi equations and discontinuous Galerkin
methods are presented. In particular, issues in the development of discontinuous Galerkin (DG)
methods for the Hamilton-Jacobi equations are discussed.
The development of the ALE-DG methods is based on the assumption that the distribution of the grid points is explicitly given for an upcoming time level. This assumption allows the construction of a time-dependent local affine linear mapping to a reference cell and a time-dependent finite element test function space. In addition, a version of Reynolds' transport theorem can be proven.
For the fully-discrete ALE-DG method for nonlinear scalar conservation laws the geometric
conservation law and a local maximum principle are proven. Furthermore, conditions for slope
limiters are stated. These conditions ensure the total variation stability of the method. In addition, entropy stability is discussed. For the corresponding semi-discrete ALE-DG method,
error estimates are proven. If a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell, the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence for monotone fluxes and the optimal $(k+1)$ convergence for an upwind flux are proven in the $\mathrm{L}^{2}$-norm. The capability of the method is shown by numerical examples for nonlinear conservation laws.
Likewise, for the semi-discrete ALE-DG method for nonlinear Hamilton-Jacobi equations, error
estimates are proven. In the one-dimensional case the optimal $\left(k+1\right)$ convergence and in the two-dimensional case the sub-optimal $\left(k+\frac{1}{2}\right)$ convergence are proven in the $\mathrm{L}^{2}$-norm, if a piecewise $\mathcal{P}^{k}$ polynomial approximation space is used on the reference cell. For the fully-discrete method, the geometric conservation law is proven and for the piecewise constant forward Euler step the convergence of the method to the unique physically relevant solution is discussed.
Mathematical modelling, simulation, and optimisation are core methodologies for future
developments in engineering, natural, and life sciences. This work aims at applying these
mathematical techniques in the field of biological processes with a focus on the wine
fermentation process that is chosen as a representative model.
In the literature, basic models for the wine fermentation process consist of a system of
ordinary differential equations. They model the evolution of the yeast population number
as well as the concentrations of assimilable nitrogen, sugar, and ethanol. In this thesis,
the concentration of molecular oxygen is also included in order to model the change of
the metabolism of the yeast from an aerobic to an anaerobic one. Further, a more sophisticated
toxicity function is used. It provides simulation results that match experimental
measurements better than a linear toxicity model. Moreover, a further equation for the
temperature plays a crucial role in this work as it opens a way to influence the fermentation
process in a desired way by changing the temperature of the system via a cooling
mechanism. From the viewpoint of the wine industry, it is necessary to cope with large-scale
fermentation vessels, where spatial inhomogeneities of concentrations and temperature
are likely to arise. Therefore, a system of reaction-diffusion equations is formulated in
this work, which acts as an approximation for a model including computationally very
expensive fluid dynamics.
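To make the structure of such models concrete, here is a deliberately minimal toy sketch (not the thesis model; the function name and all parameter values are hypothetical) of a yeast-sugar-ethanol system with Monod-type growth, integrated by the explicit Euler method:

```python
# Toy fermentation sketch: yeast biomass X grows on sugar S (Monod
# kinetics) and produces ethanol E. Explicit Euler in time; every
# parameter value below is hypothetical and for illustration only.
def simulate(X0=0.1, S0=200.0, E0=0.0, mu_max=0.2, K_S=10.0,
             Y_XS=0.1, Y_ES=0.45, dt=0.01, t_end=100.0):
    X, S, E = X0, S0, E0
    for _ in range(int(t_end / dt)):
        mu = mu_max * S / (K_S + S)        # specific growth rate
        dX = mu * X                        # biomass growth
        dS = max(-dX / Y_XS, -S / dt)      # sugar uptake, S kept >= 0
        dE = -Y_ES * dS                    # ethanol produced from sugar
        X, S, E = X + dt * dX, S + dt * dS, E + dt * dE
    return X, S, E
```

The actual model additionally tracks assimilable nitrogen, oxygen and temperature, uses a nonlinear toxicity function, and is posed as a reaction-diffusion system in space.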
In addition to the modelling issues, an optimal control problem for the proposed
reaction-diffusion fermentation model with temperature boundary control is presented
and analysed. Variational methods are used to prove the existence of unique weak solutions
to this non-linear problem. In this framework, it is possible to exploit the Hilbert
space structure of state and control spaces to prove the existence of optimal controls.
Additionally, first-order necessary optimality conditions are presented. They characterise
controls that minimise an objective functional with the purpose to minimise the final
sugar concentration. A numerical experiment shows that the final concentration of sugar
can be reduced by a suitably chosen temperature control.
The second part of this thesis deals with the identification of an unknown function
that participates in a dynamical model. For models with ordinary differential equations,
where parts of the dynamic cannot be deduced due to the complexity of the underlying
phenomena, a minimisation problem is formulated. By minimising the deviation between simulation
results and measurements, the best possible function from a trial function space
is found. The analysis of this function identification problem covers the proof of the
differentiability of the function–to–state operator, the existence of minimisers, and the
sensitivity analysis by means of the data–to–function mapping. Moreover, the presented
function identification method is extended to stochastic differential equations. Here, the
objective functional consists of the difference of measured values and the statistical expected
value of the stochastic process solving the stochastic differential equation. Using a
Fokker-Planck equation that governs the probability density function of the process, the
probabilistic problem of simulating a stochastic process is cast to a deterministic partial
differential equation. Proofs of unique solvability of the forward equation, the existence of
minimisers, and first-order necessary optimality conditions are presented. The application
of the function identification framework to the wine fermentation model aims at finding
the shape of the toxicity function and is carried out for the deterministic as well as the
stochastic case.
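For reference, the standard one-dimensional Fokker-Planck equation associated with an Itô process $dX_t = b(X_t,t)\,dt + \sigma(X_t,t)\,dW_t$ governs its probability density $p(x,t)$ via

```latex
\partial_t p(x,t) \;=\; -\,\partial_x\bigl(b(x,t)\,p(x,t)\bigr)
\;+\; \tfrac{1}{2}\,\partial_{xx}\bigl(\sigma^2(x,t)\,p(x,t)\bigr),
```

so that expected values entering the objective functional can be computed from the solution of a deterministic PDE instead of from sampled trajectories.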
Extreme value theory is concerned with the stochastic modeling of rare and extreme events. While fundamental theories of classical stochastics - such as the laws of small numbers or the central limit theorem - are used to investigate the asymptotic behavior of the sum of random variables, extreme value theory focuses on the maximum or minimum of a set of observations. The limit distribution of the normalized sample maximum among a sequence of independent and identically distributed random variables can be characterized by means of so-called max-stable distributions.
This dissertation is concerned with different aspects of the theory of max-stable random vectors and stochastic processes. In particular, the concept of 'differentiability in distribution' of a max-stable process is introduced and investigated. Moreover, 'generalized max-linear models' are introduced in order to interpolate a known max-stable random vector by a max-stable process. Further, the connection between extreme value theory and multivariate records is established. In particular, so-called 'complete' and 'simple' records are introduced and their asymptotic behavior is examined.
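As standard background: a distribution function $G$ is max-stable if for every $n$ there are constants $a_n > 0$, $b_n$ with $G^n(a_n x + b_n) = G(x)$; up to location and scale, these are exactly the generalized extreme value distributions

```latex
G_\gamma(x) \;=\; \exp\!\Bigl(-(1+\gamma x)^{-1/\gamma}\Bigr), \qquad 1 + \gamma x > 0,
```

with $G_0(x) = \exp(-e^{-x})$ (the Gumbel case) as the limit $\gamma \to 0$.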
Proximal methods are iterative optimization techniques for functionals, J = J1 + J2, consisting of a differentiable part J2 and a possibly nondifferentiable part J1. In this thesis proximal methods for finite- and infinite-dimensional optimization problems are discussed. In finite dimensions, they solve l1- and TV-minimization problems that are effectively applied to image reconstruction in magnetic resonance imaging (MRI). Convergence of these methods in this setting is proved. The proposed proximal scheme is compared to a split proximal scheme and it achieves a better signal-to-noise ratio. In addition, an application that uses parallel imaging is presented.
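As an illustration of the finite-dimensional setting, the following is a minimal proximal gradient (ISTA) sketch for $J(x) = \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$ in pure Python with hypothetical data; the schemes of the thesis and the MRI application are considerably more elaborate:

```python
# Proximal gradient (ISTA) sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# Pure Python; A is given as a list of rows. Illustrative only.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied componentwise."""
    return [(abs(vi) - t if abs(vi) > t else 0.0) * (1.0 if vi >= 0 else -1.0)
            for vi in v]

def ista(A, b, lam, step, iters=100):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the smooth part: A^T (A x - b)
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # forward (gradient) step followed by backward (proximal) step
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

For $A = I$ the iteration reduces to componentwise soft-thresholding of $b$, which makes the nondifferentiable part explicit.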
In infinite dimensions, these methods are discussed to solve nonsmooth linear and bilinear elliptic and parabolic optimal control problems. In particular, fast convergence of these methods is proved. Furthermore, for benchmarking purposes, truncated proximal schemes are compared to an inexact semismooth Newton method. Results of numerical experiments are presented that demonstrate the computational effectiveness of our proximal schemes, which need less computation time than the semismooth Newton method in most cases, and that successfully validate the theoretical estimates.
Based on the work of Eisenberg and Noe [2001], Suzuki [2002], Elsinger [2009] and Fischer [2014], we consider a generalization of Merton's asset valuation approach where n firms are linked by cross-ownership of equities and liabilities. Each firm is assumed to have a single outstanding liability, whereas its assets consist of one system-exogenous asset, as well as system-endogenous assets comprising some fraction of other firms' equity and liability, respectively. Following Fischer [2014], one can obtain no-arbitrage prices of equity and the recovery claims of liabilities as solutions of a fixed point problem, and hence obtain no-arbitrage prices of the `firm value' of each firm, which is the value of the firm's liability plus the firm's equity.
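A minimal sketch of such a fixed-point computation (in the spirit of the cited fixed-point characterization; the function name, the iteration scheme and all numbers are illustrative, not the algorithm of the thesis):

```python
def clearing(a, d, Ms, Md, iters=1000):
    """Fixed-point iteration for equity values s and debt recovery values r
    under cross-ownership: firm i's assets are its exogenous asset a[i] plus
    the fraction Ms[i][j] it holds of firm j's equity and the fraction
    Md[i][j] it holds of firm j's debt."""
    n = len(a)
    s = [0.0] * n
    r = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            v = a[i] + sum(Ms[i][j] * s[j] + Md[i][j] * r[j] for j in range(n))
            s[i] = max(v - d[i], 0.0)   # equity: residual claim
            r[i] = min(v, d[i])         # debt recovery, capped at nominal d[i]
    return s, r
```

With all cross-ownership fractions zero this reduces to the Merton-style payoffs $\max(a_i - d_i, 0)$ and $\min(a_i, d_i)$.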
In a first step, we consider the two-firm case where explicit formulae for the no-arbitrage prices of the firm values are available (cf. Suzuki [2002]). Since firm values are derivatives of exogenous asset values, the distribution of firm values at maturity can be determined from the distribution of exogenous asset values. The Merton model and most of its known extensions do not account for the cross-ownership structure of the assets owned by the firm. Therefore the assumption of lognormally distributed exogenous assets leads to lognormally distributed firm values in such models, as the values of the liability and the equity add up to the exogenous asset's value (which has lognormal distribution by assumption). Our work therefore starts from lognormally distributed exogenous assets and reveals how cross-ownership, when correctly accounted for in the valuation process, affects the distribution of the firm value, which is not lognormal anymore. In a simulation study we examine the impact of several parameters (amount of cross-ownership of debt and equity, ratio of liabilities to expected exogenous assets value) on the differences between the distribution of firm values obtained from our model and correspondingly matched lognormal distributions. It becomes clear that the assumption of lognormally distributed firm values may lead to both over- and underestimation of the “true" firm values (within the cross-ownership model) and consequently of bankruptcy risk, too.
In a second step, the bankruptcy risk of one firm within the system is analyzed in more detail in a further simulation study, revealing that the correct incorporation of cross-ownership in the valuation procedure becomes more important as the cross-ownership structure between the two firms becomes tighter. Furthermore, depending on the considered type of cross-ownership (debt or equity), the assumption of lognormally distributed firm values is likely to result in an overestimation or underestimation, respectively, of the actual probability of default. In a similar vein, we consider the Value-at-Risk (VaR) of a firm in the system, which we calculate as the negative α-quantile of the firm value at maturity minus the firm's risk neutral price in t=0, i.e. we consider the (1-α)100%-VaR of the change in firm value. If we let the cross-ownership fractions (i.e. the fraction that one firm holds of another firm's debt or equity) converge to 1 (which is the supremum of the possible values that cross-ownership fractions can take), we can prove that in a system of two firms, the lognormal model will overestimate or underestimate, respectively, both univariate and bivariate probabilities of default under cross-ownership of debt only or cross-ownership of equity only. Furthermore, we provide a formula that allows us to check for an arbitrary scenario of cross-ownership and any non-negative distribution of exogenous assets whether the approximating lognormal model will over- or underestimate the related probability of default of a firm. In particular, any given non-negative distribution of exogenous asset values (non-degenerate in a certain sense) can be transformed into a new, “extreme” distribution of exogenous assets yielding such a low or high actual probability of default that the approximating lognormal model will over- and underestimate this risk, respectively.
After this analysis of the univariate distribution of firm values under cross-ownership in a system of two firms with bivariately lognormally distributed exogenous asset values, we consider the copula of these firm values as a distribution-free measure of the dependency between these firm values. Without cross-ownership, this copula would be the Gaussian copula. Under cross-ownership, we especially consider the behaviour of the copula of firm values in the lower left and upper right corner of the unit square, and depending on the type of cross-ownership and the considered corner, we either obtain error bounds as to how good the copula of firm values under cross-ownership can be approximated with the Gaussian copula, or we see that the copula of firm values can be written as the copula of two linear combinations of exogenous asset values (note that these linear combinations are not lognormally distributed). These insights serve as a basis for our analysis of the tail dependence coefficient of firm values under cross-ownership. Under cross-ownership of debt only, firm values remain upper tail independent, whereas they become perfectly lower tail dependent if the correlation between exogenous asset values exceeds a certain positive threshold, which does not depend on the exact level of cross-ownership. Under cross-ownership of equity only, the situation is reverse in that firm values always remain lower tail independent, but upper tail independence is preserved if and only if the right tail behaviour of both firms’ values is determined by the right tail behaviour of the firms’ own exogenous asset value instead of the respective other firm’s exogenous asset value.
Next, we return to systems of n≥2 firms and analyze sensitivities of no-arbitrage prices of equity and the recovery claims of liabilities with respect to the model parameters. In the literature, such sensitivities are provided with respect to exogenous asset values by Gouriéroux et al. [2012], and we extend the existing results by considering how these no-arbitrage prices depend on the cross-ownership fractions and the level of liabilities. For the former, we can show that all prices are non-decreasing in any cross-ownership fraction in the model, and by use of a version of the Implicit Function Theorem we can also determine exact derivatives. For the latter, we show that the recovery value of debt and the equity value of a firm are non-decreasing and non-increasing in the firm's nominal level of liabilities, respectively, but the firm value is in general not monotone in the firm's level of liabilities. Furthermore, no-arbitrage prices of equity and the recovery claims of liabilities of a firm are in general non-monotone in the nominal level of liabilities of other firms in the system. If we confine ourselves to one type of cross-ownership (i.e. debt or equity), we can derive more precise relationships. All the results can be transferred to risk-neutral prices before maturity.
Finally, following Gouriéroux et al. [2012] and as a kind of extension to the above sensitivity results, we consider how immediate changes in exogenous asset values of one or more firms at maturity affect the financial health of a system of n initially solvent firms. We start with some theoretical considerations on what we call the contagion effect, namely the change in the endogenous asset value of a firm caused by shocks on the exogenous assets of firms within the system. For the two-firm case, an explicit formula is available, making clear that in general (and in particular under cross-ownership of equity only), the effect of contagion can be positive as well as negative, i.e. it can both mitigate and exacerbate the change in the exogenous asset value of a firm. On the other hand, we cannot generally say that a tighter cross-ownership structure leads to bigger absolute contagion effects. Under cross-ownership of debt only, firms cannot profit from positive shocks beyond the direct effect on exogenous assets, as the contagion effect is always non-positive. Next, we are concerned with spillover effects of negative shocks on a subset of firms to other firms in the system (experiencing non-negative shocks themselves), driving them into default due to large losses in their endogenous asset values. Extending the results of Glasserman and Young [2015], we provide a necessary condition for the shock to cause such an event. This also yields an upper bound for the probability of such an event. We further investigate how the stability of a system of firms exposed to multiple shocks depends on the model parameters in a simulation study. In doing so, we consider three network types (incomplete, core-periphery and ring network) with simultaneous shocks on some of the firms, wiping out a certain percentage of their exogenous assets. 
Then we analyze for all three types of cross-ownership (debt only, equity only, both debt and equity) how the shock intensity, the shock size, and network parameters such as the number of links in the network and the proportion of a firm's debt or equity held within the system of firms influence several output parameters, comprising the total number of defaults and the relative loss in the sum of firm values, among others. Comparing our results to the studies of Nier et al. [2007], Gai and Kapadia [2010] and Elliott et al. [2014], we can only partly confirm their results with respect to the number of defaults. We conclude our work with a theoretical comparison of the complete network (where each firm holds a part of any other firm) and the ring network with respect to the number of defaults caused by a shock on a single firm, as done by Allen and Gale [2000]. In line with the literature, we find that under cross-ownership of debt only, complete networks are “robust yet fragile” [Gai and Kapadia, 2010] in that moderate shocks can be completely withstood or drive the firm directly hit by the shock into default, but as soon as the shock exceeds a certain size, all firms are simultaneously in default. In contrast to that, firms default one by one in the ring network, with the first “contagious default” (i.e. a default of a firm not directly hit by the shock) already occurring for smaller shock sizes than under the complete network.
The first goal of this thesis is to generalize Loewner's famous differential equation to multiply connected domains. The resulting differential equations are known as Komatu--Loewner differential equations. We discuss Komatu--Loewner equations for canonical domains (circular slit disks, circular slit annuli and parallel slit half-planes). Additionally, we give a generalization to several slits and discuss parametrizations that lead to constant coefficients. Moreover, we compare Komatu--Loewner equations with several slits to single slit Loewner equations.
Finally, we generalize Komatu--Loewner equations to hulls satisfying a local growth property.
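For context, the classical (chordal) Loewner equation in the upper half-plane, which the Komatu--Loewner equations generalize to multiply connected domains, reads

```latex
\frac{\partial g_t(z)}{\partial t} \;=\; \frac{2}{g_t(z) - U(t)},
\qquad g_0(z) = z,
```

where $U$ is a continuous real-valued driving function and $g_t$ maps the complement of the growing hull conformally back onto the half-plane.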
This thesis deals with the hp-finite element method (FEM) for linear quadratic optimal control problems. Here, a tracking-type functional with control costs as regularization is minimized subject to an elliptic partial differential equation. In the presence of control constraints, the first-order necessary conditions, which are typically used to find optimal solutions numerically, can be formulated as a semi-smooth projection formula. Consequently, optimal solutions may be non-smooth as well. The hp-discretization technique takes this fact into account and approximates rough functions on fine meshes while using higher-order finite elements on domains where the solution is smooth.
The first main achievement of this thesis is the successful application of hp-FEM to two related problem classes: Neumann boundary and interface control problems. They are solved with an a-priori refinement strategy called boundary concentrated (bc) FEM and interface concentrated (ic) FEM, respectively. These strategies generate grids that are heavily refined towards the boundary or interface. We construct an elementwise interpolant that allows us to prove algebraic decay of the approximation error for both techniques. Additionally, a detailed analysis of global and local regularity of solutions, which is critical for the speed of convergence, is included. Since the bc- and ic-FEM retain small polynomial degrees for elements touching the boundary and interface, respectively, we are able to deduce novel error estimates in the L2- and L∞-norm. The latter allows an a-priori strategy for updating the regularization parameter in the objective functional to solve bang-bang problems.
Furthermore, we apply the traditional idea of the hp-FEM, i.e., grading the mesh geometrically towards vertices of the domain, for solving optimal control problems (vc-FEM). In doing so, we obtain exponential convergence with respect to the number of unknowns. This is proved with a regularity result in countably normed spaces for the variables of the coupled optimality system.
The second main achievement of this thesis is the development of a fully adaptive hp-interior point method that can solve problems with distributed or Neumann control. The underlying barrier problem yields a non-linear optimality system, which poses a numerical challenge: the numerically stable evaluation of integrals over possibly singular functions in higher-order elements. We successfully overcome this difficulty by monitoring the control variable at the integration points and enforcing feasibility in an additional smoothing step. In this work, we prove convergence of an interior point method with smoothing step and derive a-posteriori error estimators. The adaptive mesh refinement is based on the expansion of the solution in a Legendre series. The decay of the coefficients serves as an indicator for smoothness that guides the choice between h- and p-refinement.
The goal of this thesis is to investigate conformal mappings onto circular arc polygon domains, i.e. domains that are bounded by polygons consisting of circular arcs instead of line segments.
Conformal mappings onto circular arc polygon domains contain parameters in addition to the classical parameters of the Schwarz-Christoffel transformation. To contribute to the parameter problem of conformal mappings from the unit disk onto circular arc polygon domains, we investigate two special cases of these mappings. In the first case we can describe the additional parameters if the bounding circular arc polygon is a polygon with straight sides. In the second case we provide an approximation for the additional parameters if the circular arc polygon domain satisfies some symmetry conditions. These results allow us to draw conclusions on the connection between these additional parameters and the classical parameters of the mapping.
For conformal mappings onto multiply connected circular arc polygon domains, we provide an alternative construction of the mapping formula without using the Schottky-Klein prime function. In the process of constructing our main result, mappings for domains of connectivity three or greater, we also provide a formula for conformal mappings onto doubly connected circular arc polygon domains. The comparison of these mapping formulas with already known mappings allows us to provide values for some of the parameters of the mappings onto doubly connected circular arc polygon domains if the image domain is a polygonal domain.
The different components of the mapping formula are constructed by using a slightly modified variant of the Poincaré theta series. This construction includes the design of a function to remove unwanted poles and of different versions of functions that are analytic on the domain of definition of the mapping functions and satisfy some special functional equations.
We also provide the necessary concepts to numerically evaluate the conformal mappings onto multiply connected circular arc polygon domains. As the evaluation of such a map requires the solution of a differential equation, we provide a possible configuration of curves inside the preimage domain to solve the equation along them in addition to a description of the procedure for computing either the formula for the doubly connected case or the case of connectivity three or greater. We also describe the procedures for solving the parameter problem for multiply connected circular arc polygon domains.
The purpose of confidence and prediction intervals is to provide an interval estimation for an unknown distribution parameter or the future value of a phenomenon. In many applications, prior knowledge about the distribution parameter is available, but rarely made use of outside a Bayesian framework. This thesis provides exact frequentist confidence intervals of minimal volume exploiting prior information. The scheme is applied to distribution parameters of the binomial and the Poisson distribution. The Bayesian approach to obtain intervals on a distribution parameter in the form of credibility intervals is considered, with particular emphasis on the binomial distribution. An application of interval estimation is found in auditing, where two-sided intervals of Stringer type are meant to contain the mean of a zero-inflated population. In the context of time series analysis, covariates are supposed to improve the prediction of future values. Exponential smoothing with covariates, an extension of the popular exponential smoothing forecasting method, is considered in this thesis. A double-seasonality version of it is applied to forecast hourly electricity load using meteorological covariates. Different kinds of prediction intervals for exponential smoothing with covariates are formulated.
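For orientation, the core recursion of simple exponential smoothing (without covariates) can be sketched as follows; the covariate and double-seasonality extensions studied in the thesis add regression and seasonal terms to this level equation. The function name is illustrative:

```python
def ses_forecast(y, alpha):
    """Simple exponential smoothing: the level is updated by
    l_t = alpha * y_t + (1 - alpha) * l_{t-1}, initialized with the
    first observation; the one-step-ahead forecast is the final level."""
    level = y[0]
    for yt in y[1:]:
        level = alpha * yt + (1 - alpha) * level
    return level
```

For alpha = 1 the forecast is just the last observation (a naive forecast); smaller alpha averages over a longer history.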
The subject of this thesis is the rigorous passage from discrete systems to continuum models via variational methods.
The first part of this work studies a discrete model describing a one-dimensional chain of atoms with finite range interactions of Lennard-Jones type. We derive an expansion of the ground state energy using \(\Gamma\)-convergence. In particular, we show that a variant of the Cauchy-Born rule holds true for the model under consideration. We exploit this observation to derive boundary layer energies due to asymmetries of the lattice at the boundary or at cracks of the specimen. Thereby we extend several results obtained previously for models involving only nearest and next-to-nearest neighbour interactions by Braides and Cicalese, and by Scardia, Schlömerkemper and Zanini.
The second part of this thesis is devoted to the analysis of a quasi-continuum (QC) method. To this end, we consider the discrete model studied in the first part of this thesis as the fully atomistic model problem and construct an approximation based on a QC method. We show that in an elastic setting the expansion by \(\Gamma\)-convergence of the fully atomistic energy and its QC approximation coincide. In the case of fracture, we show that this is not true in general. In the case of only nearest and next-to-nearest neighbour interactions, we give sufficient conditions on the QC approximation such that, also in case of fracture, the minimal energies of the fully atomistic energy and its approximation coincide in the limit.
The thesis ’Hurwitz’s Complex Continued Fractions - A Historical Approach and Modern Perspectives.’ deals with two branches of mathematics: number theory and the history of mathematics. At first glance this might be unexpected; on closer inspection, however, it is a very fruitful combination. When doing research in mathematics, it is very helpful to be aware of the beginnings and development of the corresponding subject.
In the case of complex continued fractions the origins can easily be traced back to the end of the 19th century (see [Perron, 1954, vol. 1, Ch. 46]). One of their godfathers was the famous mathematician Adolf Hurwitz. While studying his transition from real to complex continued fraction theory [Hurwitz, 1888], our attention was caught by the article ’Ueber eine besondere Art der Kettenbruch-Entwicklung complexer Grössen’ [Hurwitz, 1895] from 1895 by an author called J. Hurwitz. We were surprised to find out that he was Adolf’s elder, largely unknown brother Julius; furthermore, Julius Hurwitz introduced a complex continued fraction that also appeared (unmentioned) in an ergodic theoretical work from 1985 [Tanaka, 1985]. These observations formed the basis of our main research questions:
What is the historical background of Adolf and Julius Hurwitz and their mathematical studies? and What modern perspectives are provided by their complex continued fraction expansions?
In this work we examine complex continued fractions from various viewpoints. After a brief introduction on real continued fractions, we firstly devote ourselves to the lives of the brothers Adolf and Julius Hurwitz. Two excursions on selected historical aspects in respect to their work complete this historical chapter. In the sequel we shed light on Hurwitz’s, Adolf’s as well as Julius’, approaches to complex continued fraction expansions.
Correspondingly, in the following chapter we take a more modern perspective. Highlights are an ergodic theoretical result, namely a variation on the Döblin-Lenstra Conjecture [Bosma et al., 1983], as well as a result on transcendental numbers in the tradition of Roth’s theorem [Roth, 1955]. In two subsequent chapters we are concerned with arithmetical properties of complex continued fractions. Firstly, an analogue to Marshall Hall’s Theorem from 1947 [Hall, 1947] on sums of continued fractions is derived. Secondly, a general approach to new types of continued fractions is presented, building on the structural properties of lattices. Finally, in the last chapter we take up this approach and obtain an upper bound for the approximation quality of diophantine approximations by quotients of lattice points in the complex plane, generalizing a method of Hermann Minkowski, improved by Hilde Gintner [Gintner, 1936], based on ideas from the geometry of numbers.
The investigation of interacting multi-agent models is a new field of mathematical research with applications to the study of behavior in groups of animals or communities of people. One interesting feature of multi-agent systems is collective behavior. From the mathematical point of view, one of the challenging issues concerning these dynamical models is the development of control mechanisms that are able to influence the time evolution of these systems.
In this thesis, we focus on the study of controllability, stabilization and optimal control problems for multi-agent systems, considering the following three models: The first one is the Hegselmann-Krause opinion formation (HK) model. The HK dynamics describes how individuals' opinions are changed by the interaction with others taking place in a bounded domain of confidence. The study of this model focuses on determining feedback controls in order to drive the agents' opinions to reach a desired agreement. The second model is the Heider social balance (HB) model. The HB dynamics explains the evolution of relationships in a social network. One purpose of studying this system is the construction of a control function in order to steer the relationships toward a friendship state. The third model that we discuss is a flocking model describing collective motion observed in biological systems. The flocking model under consideration includes self-propelling, friction, attraction, repulsion, and alignment features. We investigate a control for steering the flocking system to track a desired trajectory. Common to all these systems is our strategy to add a leader agent that interacts with all other members of the system and includes the control mechanism.
Our control through leadership approach is developed using classical theoretical control methods and a model predictive control (MPC) scheme. To apply the former method, for each model the stability of the corresponding linearized system near consensus is investigated. Further, local controllability is examined. However, only for the Hegselmann-Krause opinion formation model is a feedback control determined that steers the agents' opinions to global convergence to a desired agreement. The MPC approach is an optimal control strategy based on numerical optimization. To apply the MPC scheme, optimal control problems for each model are formulated, where the objective functions differ depending on the desired objective of the problem. The first-order necessary optimality conditions for each problem are presented. Moreover, for the numerical treatment, a sequence of open-loop discrete optimality systems is solved by accurate Runge-Kutta schemes, and in the optimization procedure, a nonlinear conjugate gradient solver is implemented. Finally, numerical experiments are performed to investigate the properties of the multi-agent models and demonstrate the ability of the proposed control strategies to drive multi-agent systems to attain a desired consensus and to track a given trajectory.
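A toy sketch of the control-through-leadership idea for the HK model (explicit Euler in time, all-to-all leader coupling; the function name, gains, bound and coupling form are illustrative assumptions, not the controller of the thesis):

```python
def hk_with_leader(x, target, eps=1.0, gain=0.5, dt=0.1, steps=400):
    """Bounded-confidence (HK) opinion dynamics plus a leader that pulls
    every agent toward `target`. Agents interact only with opinions within
    the confidence bound eps. Returns the final opinion vector."""
    x = list(x)
    for _ in range(steps):
        new = []
        for xi in x:
            # averaged influence of agents within the confidence bound
            pull = sum(xj - xi for xj in x if abs(xj - xi) < eps) / len(x)
            # leader term steering toward the desired consensus value
            pull += gain * (target - xi)
            new.append(xi + dt * pull)
        x = new
    return x
```

At the target value every term vanishes, so the desired consensus is an equilibrium of the controlled dynamics; the leader term makes even initially disconnected opinion clusters converge to it.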
An efficient and accurate computational framework for solving control problems governed by quantum spin systems is presented. Spin systems are extremely important in modern quantum technologies such as nuclear magnetic resonance spectroscopy, quantum imaging, and quantum computing. In these applications, two classes of quantum control problems arise: optimal control problems and exact-controllability problems, both with a bilinear control structure. These models correspond to the Schrödinger-Pauli equation, describing the time evolution of a spinor, and to the Liouville-von Neumann master equation, describing the time evolution of a density operator. This thesis focuses on quantum control problems governed by these models. An appropriate definition of the optimization objectives and of the admissible set of control functions allows one to construct controls with specific properties. These properties are in general required by the physics and the technologies involved in quantum control applications. A main purpose of this work is to address non-differentiable quantum control problems. For this reason, a computational framework is developed to address optimal control problems, possibly with an L1 penalization term in the cost functional, and exact-controllability problems. In both cases the set of admissible control functions is a subset of a Hilbert space. The bilinear control structure of the quantum model, the L1 penalization term, and the control constraints generate strong nonlinearities that make the corresponding control problems difficult to solve and analyse. The first part of this thesis focuses on the physical description of the spin of particles and of the magnetic resonance phenomenon. Afterwards, the controlled Schrödinger-Pauli equation and the Liouville-von Neumann master equation are discussed. These equations, like many other controlled quantum models, can be represented by dynamical systems with a bilinear control structure.
In the second part of this thesis, theoretical investigations of optimal control problems, with a possible L1 penalization term in the objective and control constraints, are considered. In particular, existence of solutions, optimality conditions, and regularity properties of the optimal controls are discussed. In order to solve these optimal control problems, semi-smooth Newton methods are developed and proved to be superlinearly convergent. The main difficulty in the implementation of a Newton method for optimal control problems comes from the dimension of the Jacobian operator: in discrete form, the Jacobian is a very large matrix, which makes its explicit construction infeasible from a practical point of view. For this reason, the focus of this work is on inexact Krylov-Newton methods, which combine the Newton method with Krylov iterative solvers for linear systems and thus avoid the construction of the discrete Jacobian. In the third part of this thesis, two methodologies for the exact controllability of quantum spin systems are presented. The first method consists of a continuation technique, while the second is based on a particular reformulation of the exact-control problem. Both methodologies address minimum-L2-norm exact-controllability problems. In the fourth part, the thesis focuses on the numerical analysis of quantum control problems. In particular, the modified Crank-Nicolson scheme is discussed as an adequate time discretization of the Schrödinger equation, the first-discretize-then-optimize strategy is used to obtain a discrete reduced-gradient formula for the differentiable part of the optimization objective, and implementation details and globalization strategies that guarantee an adequate numerical behaviour of the semi-smooth Newton methods are treated.
In the last part of this work, several numerical experiments are performed to validate the theoretical results and demonstrate the ability of the proposed computational framework to solve quantum spin control problems.
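As a small illustration of a norm-preserving time discretization for a bilinear-control spin model, the following sketch advances i psi' = (sigma_z + u*sigma_x) psi by the Crank-Nicolson (Cayley) step; the two-level Hamiltonian, control value, and step size are illustrative assumptions. Since the Cayley transform of a Hermitian matrix is unitary, the norm of the state is preserved up to roundoff.

```python
def cn_step(psi, u_mid, dt):
    """One Crank-Nicolson (Cayley) step for  i psi' = H psi  with the
    bilinear-control Hamiltonian  H = sigma_z + u*sigma_x = [[1, u], [u, -1]]:
    solves (I + i*dt/2*H) psi_new = (I - i*dt/2*H) psi by Cramer's rule."""
    a = 1j * dt / 2.0
    m11, m12, m21, m22 = 1 + a, a * u_mid, a * u_mid, 1 - a
    r1 = (1 - a) * psi[0] - a * u_mid * psi[1]
    r2 = -a * u_mid * psi[0] + (1 + a) * psi[1]
    det = m11 * m22 - m12 * m21
    return ((m22 * r1 - m12 * r2) / det, (m11 * r2 - m21 * r1) / det)

psi = (1.0 + 0j, 0j)                 # start in the spin-up eigenstate
for _ in range(1000):
    psi = cn_step(psi, u_mid=0.3, dt=0.01)
norm = abs(psi[0]) ** 2 + abs(psi[1]) ** 2
```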
In this thesis, discrete moments of the Riemann zeta-function and allied Dirichlet series are studied.
In the first part, the asymptotic value-distribution of zeta-functions is studied, where the samples are taken from a Cauchy random walk on a vertical line inside the critical strip. Building on techniques of Lifshits and Weber, analogous results are derived for the Hurwitz zeta-function. Using Atkinson’s dissection, this is even generalized to Dirichlet L-functions associated with a primitive character. Both results indicate that the expectation value equals one, which shows that the values of these zeta-functions are small on average.
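The Cauchy random walk sampling can be mimicked in a few lines. In the sketch below the vertical line Re(s) = 2 is used instead of a line inside the critical strip, so that a plain truncated Dirichlet series is an adequate substitute for the zeta-function; the number of terms, samples, and seed are illustrative. In the spirit of the result above, the sample mean of the values should be close to 1.

```python
import math
import random

def zeta(s, terms=2000):
    """Truncated Dirichlet series for zeta(s); adequate for Re(s) > 1."""
    return sum(n ** (-s) for n in range(1, terms + 1))

random.seed(1)
sigma, S, samples = 2.0, 0.0, []
for _ in range(200):
    S += math.tan(math.pi * (random.random() - 0.5))  # standard Cauchy step
    samples.append(zeta(complex(sigma, S)))
mean = sum(samples) / len(samples)
```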
The second part deals with the logarithmic derivative of the Riemann zeta-function on vertical lines and here the samples are with respect to an explicit ergodic transformation. Extending work of Steuding, discrete moments are evaluated and an equivalent formulation for the Riemann Hypothesis in terms of ergodic theory is obtained.
In the third and last part of the thesis, the phenomenon of universality with respect
to stochastic processes is studied. It is shown that certain random shifts of the zeta-function can approximate non-vanishing analytic target functions as well as we please. This result relies on Voronin's universality theorem.
Several aspects of the stability analysis of large-scale discrete-time systems are considered. An important feature is that the right-hand side does not have to be continuous.
In particular, constructive approaches to compute Lyapunov functions are derived and applied to several system classes.
For large-scale systems, which are considered as an interconnection of smaller subsystems, we derive a new class of small-gain results which do not require the subsystems to be robust in any sense. Moreover, we not only study the sufficiency of these conditions, but also state an assumption under which they are necessary.
Moreover, gain construction methods are derived for several types of aggregation, quantifying how large a prescribed set of interconnection gains can be in order that a small-gain condition holds.
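The flavour of a small-gain condition can be illustrated numerically: for a monotone max-type gain operator, the condition holds precisely when iterating the operator from the all-ones vector contracts to zero. The gain matrices below are illustrative examples, not taken from the thesis.

```python
def smallgain_holds(gamma, iters=200, tol=1e-8):
    """Check a (linear, max-type) small-gain condition by iterating the
    monotone gain operator  s -> (max_j gamma[i][j]*s[j])_i  from the
    all-ones vector; contraction to (numerical) zero certifies it."""
    n = len(gamma)
    s = [1.0] * n
    for _ in range(iters):
        s = [max(gamma[i][j] * s[j] for j in range(n)) for i in range(n)]
    return max(s) < tol

# two subsystems with interconnection gains 0.5 and 0.8: cycle gain 0.4 < 1
ok = smallgain_holds([[0.0, 0.5], [0.8, 0.0]])
bad = smallgain_holds([[0.0, 1.5], [0.9, 0.0]])  # cycle gain 1.35 > 1
```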
Analysis of discretization schemes for Fokker-Planck equations and related optimality systems
(2015)
The Fokker-Planck (FP) equation is a fundamental model in thermodynamic kinetic theories and
statistical mechanics.
In general, the FP equation appears in a number of different fields in the natural sciences, for instance in solid-state physics, quantum optics, chemical physics, theoretical biology, and circuit theory. These equations also provide a powerful means to define
robust control strategies for random models. The FP equations are partial differential equations (PDE) describing the time evolution of the probability density function (PDF) of stochastic processes.
These equations are of different types depending on the underlying stochastic process.
In particular, they are parabolic PDEs for the PDF of Ito processes, and hyperbolic PDEs for piecewise deterministic processes (PDP).
A fundamental axiom of probability calculus requires that the integral of the PDF over all the allowable state space must be equal to one, for all time. Therefore, for the purpose of accurate numerical simulation, a discretized FP equation must guarantee conservativeness of the total probability. Furthermore, since the
solution of the FP equation represents a probability density, any numerical scheme that approximates the FP equation is required to guarantee the positivity of the solution. In addition, an approximation scheme must be accurate and stable.
For these purposes, for parabolic FP equations on bounded domains, we investigate the Chang-Cooper (CC) scheme for space discretization and first- and
second-order backward time differencing. We prove that the resulting
space-time discretization schemes are accurate, conditionally stable, conservative, and preserve positivity.
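A minimal sketch of the Chang-Cooper space discretization combined with first-order backward time differencing, written for the Fokker-Planck equation f_t = (theta*x*f + D*f_x)_x of an Ornstein-Uhlenbeck process with zero-flux boundaries; the grid, time step, and coefficients are illustrative assumptions. By construction, the step conserves the total probability, preserves positivity, and its discrete steady state reproduces the Gaussian equilibrium at the grid points.

```python
import math

def chang_cooper_step(f, x, dt, theta=1.0, D=1.0):
    """One backward-Euler Chang-Cooper step for f_t = (theta*x*f + D*f_x)_x
    on a uniform grid x with zero-flux boundary conditions."""
    n, h = len(x), x[1] - x[0]

    def delta(w):                        # Chang-Cooper weight
        return 0.5 if abs(w) < 1e-10 else 1.0 / w - 1.0 / (math.exp(w) - 1.0)

    # flux J_{i+1/2} = cp[i]*f[i+1] + cm[i]*f[i]
    cp, cm = [0.0] * (n - 1), [0.0] * (n - 1)
    for i in range(n - 1):
        b = theta * (x[i] + x[i + 1]) / 2.0
        d = delta(h * b / D)
        cp[i] = (1.0 - d) * b + D / h
        cm[i] = d * b - D / h

    # assemble the tridiagonal system (I - dt*A) f_new = f, where
    # (A f)_i = (J_{i+1/2} - J_{i-1/2}) / h and boundary fluxes vanish
    lo, di, up = [0.0] * n, [0.0] * n, [0.0] * n
    for i in range(n):
        jp = (cp[i], cm[i]) if i < n - 1 else (0.0, 0.0)
        jm = (cp[i - 1], cm[i - 1]) if i > 0 else (0.0, 0.0)
        di[i] = 1.0 - dt / h * (jp[1] - jm[0])
        up[i] = -dt / h * jp[0]
        lo[i] = dt / h * jm[1]
    # Thomas algorithm for the tridiagonal solve
    for i in range(1, n):
        m = lo[i] / di[i - 1]
        di[i] -= m * up[i - 1]
        f[i] -= m * f[i - 1]
    f[-1] /= di[-1]
    for i in range(n - 2, -1, -1):
        f[i] = (f[i] - up[i] * f[i + 1]) / di[i]
    return f

n = 200
x = [-6.0 + 12.0 * i / (n - 1) for i in range(n)]
h = x[1] - x[0]
f = [1.0 / (n * h)] * n                  # uniform initial density, mass 1
for _ in range(500):
    f = chang_cooper_step(f, x, dt=0.02)
```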
Further, we discuss a finite difference discretization for the FP system corresponding to a PDP process in a bounded domain.
Next, we discuss FP equations in unbounded domains.
In this case, finite-difference or finite-element methods cannot be applied. By employing a suitable set of basis functions, spectral methods allow the treatment of unbounded domains. Since FP solutions decay exponentially at infinity, we consider Hermite functions as basis functions, which are Hermite polynomials multiplied by a Gaussian.
To this end, the Hermite spectral discretization is applied
to two different FP equations: the parabolic PDE corresponding to Ito processes, and the system of hyperbolic PDEs corresponding to a PDP process. The resulting discretized schemes are analyzed. Stability and spectral accuracy of the Hermite spectral discretization of the FP problems are proved. Furthermore, we investigate the conservativeness of the solutions of FP equations discretized with the Hermite spectral scheme.
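The Hermite-function basis mentioned above can be generated by a stable three-term recurrence; the following sketch builds the first few orthonormal basis functions and checks their orthonormality by a simple Riemann sum on a truncated line (truncation bounds and grid are illustrative).

```python
import math

def hermite_functions(nmax, x):
    """Values psi_0(x), ..., psi_nmax(x) of the orthonormal Hermite
    functions (Hermite polynomials times a Gaussian), generated by the
    stable recurrence
      psi_{n+1} = x*sqrt(2/(n+1))*psi_n - sqrt(n/(n+1))*psi_{n-1}."""
    psi = [math.pi ** -0.25 * math.exp(-x * x / 2.0)]
    if nmax >= 1:
        psi.append(math.sqrt(2.0) * x * psi[0])
    for n in range(1, nmax):
        psi.append(math.sqrt(2.0 / (n + 1)) * x * psi[n]
                   - math.sqrt(n / (n + 1)) * psi[n - 1])
    return psi

# quadrature check of orthonormality on the truncated line [-12, 12]
pts = [-12.0 + 24.0 * k / 4000 for k in range(4001)]
w = 24.0 / 4000
vals = [hermite_functions(5, p) for p in pts]
inner = lambda m, n: w * sum(v[m] * v[n] for v in vals)
```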
In the last part of this thesis, we discuss optimal control problems governed by FP equations and the characterization of their solutions by optimality systems. We then investigate the Hermite spectral discretization of FP optimality systems in unbounded domains.
Within the framework of Hermite discretization, we obtain sparse-band systems of ordinary differential equations. We analyze the accuracy of the discretization schemes by showing spectral convergence in approximating the state, the adjoint, and the control variables that appear in the FP optimality systems.
To validate our theoretical estimates, we present results of numerical experiments.
In this thesis we study smoothness properties of primal and dual gap functions for generalized Nash equilibrium problems (GNEPs) and finite-dimensional quasi-variational inequalities (QVIs). These gap functions are optimal value functions of primal and dual reformulations of a corresponding GNEP or QVI as a constrained or unconstrained optimization problem. Depending on the problem type, the primal reformulation uses regularized Nikaido-Isoda or regularized gap function approaches. For player convex GNEPs and QVIs of the so-called generalized `moving set' type the respective primal gap functions are continuously differentiable. In general, however, these primal gap functions are nonsmooth for both problems. Hence, we investigate their continuity and differentiability properties under suitable assumptions. Here, our main result states that, apart from special cases, all locally minimal points of the primal reformulations are points of differentiability of the corresponding primal gap function.
Furthermore, we develop dual gap functions for a class of GNEPs and QVIs and ensuing unconstrained optimization reformulations of these problems based on an idea by Dietrich (``A smooth dual gap function solution to a class of quasivariational inequalities'', Journal of Mathematical Analysis and Applications 235, 1999, pp. 380--393). For this purpose we rewrite the primal gap functions as a difference of two strongly convex functions and employ the Toland-Singer duality theory. The resulting dual gap functions are continuously differentiable and, under suitable assumptions, have piecewise smooth gradients. Our theoretical analysis is complemented by numerical experiments. The solution methods employed make use of the first-order information established by the aforementioned theoretical investigations.
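For a standard (non-generalized) two-player game, the regularized Nikaido-Isoda approach can be made concrete in a few lines; the quadratic game below and all parameter values are illustrative assumptions, not taken from the thesis. The resulting gap function is nonnegative and vanishes exactly at the Nash equilibrium.

```python
def reg_ni_value(x, a=(0.6, 0.6), alpha=1.0):
    """Regularized Nikaido-Isoda (gap) function for a toy two-player game
    with costs  theta_v(x) = (x_v - a_v)^2 + x_1*x_2  on the strategy
    sets [0, 1].  V(x) >= 0, with equality exactly at an equilibrium."""
    def theta(v, y_v, x_other):
        return (y_v - a[v]) ** 2 + y_v * x_other
    val = 0.0
    for v in range(2):
        xo = x[1 - v]
        # inner maximizer: quadratic in y, closed form, clipped to [0, 1]
        y = (2 * a[v] - xo + alpha * x[v]) / (2 + alpha)
        y = min(1.0, max(0.0, y))
        val += (theta(v, x[v], xo) - theta(v, y, xo)
                - alpha / 2 * (y - x[v]) ** 2)
    return val
```

For a = (0.6, 0.6) the unique equilibrium is x = (0.4, 0.4), where the gap function vanishes; away from it the function is positive.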
In attempting to solve the regular inverse Galois problem for arbitrary subfields K of C (particularly for K=Q), a very important result by Fried and Völklein reduces the existence of regular Galois extensions F|K(t) with Galois group G to the existence of K-rational points on components of certain moduli spaces for families of covers of the projective line, known as Hurwitz spaces.
In some cases, the existence of rational points on Hurwitz spaces has been proven by theoretical criteria. In general, however, the question whether a given Hurwitz space has any rational point remains a very difficult problem. In concrete cases, it may be tackled by an explicit computation of a Hurwitz space and the corresponding family of covers.
The aim of this work is to collect and expand on the various techniques that may be used to solve such computational problems and apply them to tackle several families of Galois theoretic interest. In particular, in Chapter 5, we compute explicit curve equations for Hurwitz spaces for certain families of \(M_{24}\) and \(M_{23}\).
These are (to my knowledge) the first examples of explicitly computed Hurwitz spaces of such high genus. They might be used to realize \(M_{23}\) as a regular Galois group over Q if one manages to find suitable points on them.
Apart from the calculation of explicit algebraic equations, we produce complex approximations for polynomials with genus zero ramification of several different ramification types in \(M_{24}\) and \(M_{23}\). These may be used as starting points for similar computations.
The main motivation for these computations is the fact that \(M_{23}\) is currently the only remaining sporadic group that is not known to occur as a Galois group over Q.
We also compute the first explicit polynomials with Galois groups \(G=P\Gamma L_3(4), PGL_3(4), PSL_3(4)\) and \(PSL_5(2)\) over Q(t).
Special attention will be given to reality questions. As an application we compute the first examples of totally real polynomials with Galois groups \(PGL_2(11)\) and \(PSL_3(3)\) over Q.
As a suggestion for further research, we describe an explicit algorithmic version of "Algebraic Patching", following the theory described e.g. by M. Jarden. This could be used to conquer some problems regarding families of covers of genus g>0.
Finally, we present explicit Magma implementations for several of the most important algorithms involved in our computations.
The Riemann zeta-function forms a central object in multiplicative number theory; its value-distribution encodes deep arithmetic properties of the prime numbers. Here, a crucial role is assigned to the analytic behavior of the zeta-function on the so called critical line. In this thesis we study the value-distribution of the Riemann zeta-function near and on the critical line. Amongst others we focus on the following.
PART I: A modified concept of universality, a-points near the critical line and a denseness conjecture attributed to Ramachandra.
The critical line is a natural boundary of the Voronin-type universality property of the Riemann zeta-function. We modify Voronin's concept by adding a scaling factor to the vertical shifts that appear in Voronin's universality theorem and investigate whether this modified concept is appropriate to keep up a certain universality property of the Riemann zeta-function near and on the critical line. It turns out that it is mainly the functional equation of the Riemann zeta-function that restricts the set of functions which can be approximated by this modified concept around the critical line.
Levinson showed that almost all a-points of the Riemann zeta-function lie in a certain funnel-shaped region around the critical line. We complement Levinson's result: Relying on arguments of the theory of normal families and the notion of filling discs, we detect a-points in this region which are very close to the critical line.
According to a folklore conjecture (often attributed to Ramachandra) one expects that the values of the Riemann zeta-function on the critical line lie dense in the complex numbers. We show that there are certain curves which approach the critical line asymptotically and have the property that the values of the zeta-function on these curves are dense in the complex numbers.
Many of our results in part I are independent of the Euler product representation of the Riemann zeta-function and apply for meromorphic functions that satisfy a Riemann-type functional equation in general.
PART II: Discrete and continuous moments.
The Lindelöf hypothesis deals with the growth behavior of the Riemann zeta-function on the critical line. Due to classical works by Hardy and Littlewood, the Lindelöf hypothesis can be reformulated in terms of power moments to the right of the critical line. Tanaka showed recently that the expected asymptotic formulas for these power moments are true in a certain measure-theoretical sense; roughly speaking he omits a set of Banach density zero from the path of integration of these moments. We provide a discrete and integrated version of Tanaka's result and extend it to a large class of Dirichlet series connected to the Riemann zeta-function.
The work at hand studies problems from Loewner theory and is divided into two parts:
In part 1 (chapter 2) we present the basic notions of Loewner theory. Here we use a modern form which was developed by F. Bracci, M. Contreras, S. Díaz-Madrigal et al. and which can be applied to certain higher dimensional complex manifolds.
We look at two domains in more detail: the Euclidean unit ball and the polydisc. Here we consider two classes of biholomorphic mappings which were introduced by T. Poreda and G. Kohr as generalizations of the class S.
We prove a conjecture of G. Kohr about support points of these classes. The proof relies on the observation that the classes describe so called Runge domains, which follows from a result by L. Arosio, F. Bracci and E. F. Wold.
Furthermore, we prove a conjecture of G. Kohr about support points of a class of biholomorphic mappings that comes from applying the Roper-Suffridge extension operator to the class S.
In part 2 (chapter 3) we consider one special Loewner equation: the chordal multiple-slit equation in the upper half-plane.
After describing basic properties of this equation we look at the problem, whether one can choose the coefficient functions in this equation to be constant. D. Prokhorov proved this statement under the assumption that the slits are piecewise analytic. We use a completely different idea to solve the problem in its general form.
As the Loewner equation with constant coefficients holds everywhere (and not just almost everywhere), this result generalizes Loewner’s original idea to the multiple-slit case.
Moreover, we consider the following problems:
• The “simple-curve problem” asks which driving functions describe the growth of simple curves (in contrast to curves that touch themselves). We discuss necessary and sufficient conditions, generalize a theorem of J. Lind, D. Marshall and S. Rohde to the multiple-slit equation, and give an example of a set of driving functions which generate simple curves because of a certain self-similarity property.
• We discuss properties of driving functions that generate slits which enclose a given angle with the real axis.
• A theorem by O. Roth gives an explicit description of the reachable set of one point in the radial Loewner equation. We prove the analog for the chordal equation.
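For the chordal equation, a piecewise-constant driving function gives explicit elementary maps, and the tip of the growing slit can be traced by composing their inverses; the following sketch is an illustrative discretization, checked against the exactly known vertical slit generated by constant driving.

```python
import cmath

def slit_tip(driving, dt):
    """Tip of the hull generated by the chordal Loewner equation with a
    piecewise-constant driving function (one value per time step of
    length dt).  The elementary inverse maps
        f(w) = U + sqrt((w - U)^2 - 4*dt)
    are composed in reverse order; the square root is taken with the
    branch mapping into the upper half-plane."""
    w = complex(driving[-1], 0.0)        # tip of the newest slit piece
    for U in reversed(driving):
        s = cmath.sqrt((w - U) ** 2 - 4.0 * dt)
        if s.imag < 0:
            s = -s
        w = U + s
    return w

# constant driving 0 grows a vertical slit: the tip at time T is 2i*sqrt(T)
tip = slit_tip([0.0] * 100, dt=0.01)     # T = 1, so tip should be 2i
```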
Applications in various research areas, such as signal processing, quantum computing, and computer vision, can be described as constrained optimization tasks on certain subsets of tensor products of vector spaces. In this work, we make use of techniques from Riemannian geometry and analyze optimization tasks on subsets of so-called simple tensors which can be equipped with a differentiable structure. In particular, we introduce a generalized Rayleigh-quotient function on the tensor product of Grassmannians and on the tensor product of Lagrange-Grassmannians. Its optimization enables a unified approach to well-known tasks from different areas of numerical linear algebra, such as: best low-rank approximation of tensors (data compression), computing geometric measures of entanglement (quantum computing), and subspace clustering (image processing). We perform a thorough analysis of the critical points of the generalized Rayleigh-quotient and develop intrinsic numerical methods for its optimization. Explicitly, using techniques from Riemannian optimization, we present two types of algorithms: a Newton-like and a conjugate gradient algorithm. Their performance is analysed and compared with established methods from the literature.
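The simplest instance of such a Rayleigh-quotient optimization, Riemannian gradient ascent on the unit sphere (the one-factor case rather than the tensor-product setting of the thesis), can be sketched as follows; the matrix, step size, and iteration count are illustrative.

```python
def rayleigh_ascent(A, x0, step=0.1, iters=500):
    """Riemannian gradient ascent of the Rayleigh quotient x'Ax on the
    unit sphere: project the Euclidean gradient onto the tangent space
    and retract by normalization.  Converges to a dominant eigenvector
    of the symmetric matrix A (for generic starting points)."""
    n = len(A)
    x = list(x0)
    for _ in range(iters):
        ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        rho = sum(x[i] * ax[i] for i in range(n))       # Rayleigh quotient
        g = [2 * (ax[i] - rho * x[i]) for i in range(n)]  # tangent gradient
        y = [x[i] + step * g[i] for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5             # retraction
        x = [v / norm for v in y]
    return x, rho

A = [[2.0, 1.0], [1.0, 3.0]]            # largest eigenvalue (5 + sqrt(5))/2
x, rho = rayleigh_ascent(A, [1.0, 0.0])
```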
Argumentation and proof have played a fundamental role in mathematics education in recent years. The author of this dissertation investigates the development of the proving process within a dynamic geometry system in order to support tertiary students in understanding the proving process. The strengths of this dynamic system stimulate students to formulate conjectures and produce arguments during the proving process. Through empirical research, we classified different levels of proving and proposed a methodological model for proving. This methodological model contributes to improving students’ levels of proving and to developing their dynamic visual thinking. We used the Toulmin model of argumentation as a theoretical model to analyze the relationship between argumentation and proof. This research also offers some possible explanations as to why students have cognitive difficulties in constructing proofs, and provides mathematics educators with a deeper understanding of the proving process within a dynamic geometry system.
This thesis is devoted to the numerical verification of optimality conditions for non-convex optimal control problems. In the first part, we are concerned with a-posteriori verification of sufficient optimality conditions. It is common knowledge that the verification of such conditions for general non-convex PDE-constrained optimization problems is very challenging. We propose a method to verify second-order sufficient conditions for a general class of optimal control problems. If the proposed verification method confirms the fulfillment of the sufficient condition, then a-posteriori error estimates can be computed. A special ingredient of our method is an error analysis for the Hessian of the underlying optimization problem. We derive conditions under which positive definiteness of the Hessian of the discrete problem implies positive definiteness of the Hessian of the continuous problem. The results are complemented with numerical experiments. In the second part, we investigate adaptive methods for optimal control problems with finitely many control parameters. We analyze a-posteriori error estimates based on verification of second-order sufficient optimality conditions using the method developed in the first part. Reliability and efficiency of the error estimator are shown. We illustrate through numerical experiments the use of the estimator in guiding adaptive mesh refinement.
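The flavour of the positive-definiteness verification can be illustrated in finite dimensions: a Cholesky factorization of H - margin*I succeeds if and only if the smallest eigenvalue of H exceeds margin, so a successful factorization with margin taken as an error bound certifies definiteness with that margin. The matrices and the bound below are illustrative, not the thesis' actual estimates.

```python
def cholesky_pd(H, margin=0.0):
    """Attempt a Cholesky factorization of H - margin*I; success certifies
    that the symmetric matrix H is positive definite with smallest
    eigenvalue strictly greater than margin."""
    n = len(H)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = (H[i][j] - (margin if i == j else 0.0)
                 - sum(L[i][k] * L[j][k] for k in range(j)))
            if i == j:
                if s <= 0.0:
                    return False          # pivot fails: not PD with margin
                L[i][i] = s ** 0.5
            else:
                L[i][j] = s / L[j][j]
    return True
```

Here [[4, 1], [1, 3]] has smallest eigenvalue (7 - sqrt(5))/2, which is approximately 2.382, so the check succeeds with margin 2.0 and fails with margin 2.5.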
In this thesis, time-optimal control of the bi-steerable robot is addressed. The bi-steerable robot, a vehicle with two independently steerable axles, is a complex nonholonomic system with applications in many areas of land-based robotics. Motion planning and optimal control are challenging tasks for this system, since standard control schemes do not apply. The model of the bi-steerable robot considered here is a reduced kinematic model with the driving velocity and the steering angles of the front and rear axle as inputs. The steering angles of the two axles can be set independently from each other. The reduced kinematic model is a control system with affine and non-affine inputs, as the driving velocity enters the system linearly, whereas the steering angles enter nonlinearly. In this work, a new approach to solve the time-optimal control problem for the bi-steerable robot is presented. In contrast to most standard methods for time-optimal control, our approach does not rely exclusively on discretization and purely numerical methods. Instead, the Pontryagin Maximum Principle is used to characterize candidates for time-optimal solutions. The resultant boundary value problem is solved by optimization to obtain solutions to the path planning problem over a given time horizon. The time horizon is decreased and the path planning is iterated to approximate a time-optimal solution. An optimality condition is introduced which depends on the number of cusps, i.e., reversals of the driving direction of the robot. This optimality condition allows us to single out non-optimal solutions with too many cusps. In general, our approach only gives approximations of time-optimal solutions, since only normal regular extremals are considered as solutions to the path planning problem, and the path planning is terminated when an extremal with a minimal number of cusps is found.
However, for most desired configurations, normal regular extremals with the minimal number of cusps provide time-optimal solutions for the bi-steerable robot. The convergence of the approach is analyzed and its probabilistic completeness is shown. Moreover, simulation results on time-optimal solutions for the bi-steerable robot are presented.
We introduce a mathematical framework for extreme value theory in the space of continuous functions on compact intervals and provide basic definitions and tools. Continuous max-stable processes on [0,1] are characterized by their “distribution functions” G, which can be represented via a norm on function space, called D-norm. The high conformity of this setup with the multivariate case leads to the introduction of a functional domain of attraction approach for stochastic processes, which is more general than the usual one based on weak convergence. We also introduce the concept of “sojourn time transformation” and compare several types of convergence on function space. Again in complete accordance with the uni- or multivariate case, it is now possible to obtain functional generalized Pareto distributions (GPD) W via W = 1 + log(G) in the upper tail. In particular, this enables us to derive characterizations of the functional domain of attraction condition for copula processes. Moreover, we investigate the sojourn time above a high threshold of a continuous stochastic process Y. It turns out that the limit, as the threshold increases, of the expected sojourn time given that it is positive exists if the copula process corresponding to Y is in the functional domain of attraction of a max-stable process. If the process is in a certain neighborhood of a generalized Pareto process, then we can replace the constant threshold by a general threshold function and compute the asymptotic sojourn time distribution.
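In the univariate special case the relation W = 1 + log(G) is easy to make explicit; the sketch below applies it to the Gumbel distribution function, recovering the standard exponential df as the associated GPD (the choice of G is an illustrative example).

```python
import math

def G(x):
    """Max-stable df with standard Gumbel margin."""
    return math.exp(-math.exp(-x))

def W(x):
    """Generalized Pareto df obtained as W = 1 + log(G) in the upper
    tail, valid where log G(x) >= -1, i.e. x >= 0 for the Gumbel df."""
    return 1.0 + math.log(G(x))
```

For the Gumbel df this gives W(x) = 1 - exp(-x) on x >= 0, i.e. the standard exponential distribution function.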
On the Fragility Index
(2011)
The Fragility Index captures the amount of risk in a stochastic system of arbitrary dimension. Its main mathematical tool is the asymptotic distribution of exceedance counts within the system, which can be derived by use of multivariate extreme value theory. The basic assumption is that the data come from a distribution which lies in the domain of attraction of a multivariate extreme value distribution. The Fragility Index and its extension can serve as a quantitative measure of tail dependence in arbitrary dimensions. It is linked to the well-known extremal index for stochastic processes as well as to the extremal coefficient of an extreme value distribution.
In the verification of positive Harris recurrence of multiclass queueing networks, the stability analysis for the class of fluid networks is of vital interest. This thesis addresses stability of fluid networks from a Lyapunov point of view. In particular, the focus is on converse Lyapunov theorems. To gain a unified approach, the considerations are based on generic properties that fluid networks under widely used disciplines have in common. It is shown that the class of closed generic fluid network models (closed GFNs) is too wide to provide a reasonable Lyapunov theory. To overcome this fact, the class of strict generic fluid network models (strict GFNs) is introduced. In this class it is required that closed GFNs additionally satisfy a concatenation and a lower semicontinuity condition. We show that for strict GFNs a converse Lyapunov theorem is true which provides a continuous Lyapunov function. Moreover, it is shown that for strict GFNs satisfying a trajectory estimate a smooth converse Lyapunov theorem holds. To see that widely used queueing disciplines fulfill the additional conditions, fluid networks are considered from a differential inclusions perspective. Within this approach it turns out that fluid networks under general work-conserving, priority, and proportional processor-sharing disciplines define strict GFNs. Furthermore, we provide an alternative proof for the fact that the Markov process underlying a multiclass queueing network is positive Harris recurrent if the associated fluid network defining a strict GFN is stable. The proof explicitly uses the Lyapunov function admitted by the stable strict GFN. Also, the differential inclusions approach shows that first-in-first-out disciplines play a special role.
Consider the situation where two or more images are taken of the same object. After taking the first image, the object is moved or rotated so that the second recording depicts it in a different manner. Additionally, the imaging technique itself may have been changed. One of the main problems in image processing is to determine the spatial relation between such images. The corresponding process of finding the spatial alignment is called “registration”. In this work, we study the optimization problem which corresponds to the registration task. In particular, we exploit the Lie group structure of the set of transformations to construct efficient, intrinsic algorithms. We also apply the algorithms to medical registration tasks; however, the methods developed are not restricted to the field of medical image processing. We also take a closer look at more general forms of optimization problems and show connections to related tasks.
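A minimal instance of such an intrinsic method is gradient descent on SO(2), parametrized by the rotation angle, for rigid alignment of two planar point sets; the point sets, step size, and iteration count are illustrative assumptions.

```python
import math

def register_rotation(X, Y, step=0.05, iters=200):
    """Gradient descent on SO(2) (parametrized by the angle theta) for
    the rigid registration cost  sum ||R(theta)x - y||^2.  The step size
    is tuned for the illustrative data below."""
    theta = 0.0
    for _ in range(iters):
        c, s = math.cos(theta), math.sin(theta)
        grad = 0.0
        for (x1, x2), (y1, y2) in zip(X, Y):
            rx1, rx2 = c * x1 - s * x2, s * x1 + c * x2    # R(theta) x
            dx1, dx2 = -s * x1 - c * x2, c * x1 - s * x2   # dR/dtheta x
            grad += 2 * (dx1 * (rx1 - y1) + dx2 * (rx2 - y2))
        theta -= step * grad
    return theta

true_angle = 0.7
rot = lambda t, p: (math.cos(t) * p[0] - math.sin(t) * p[1],
                    math.sin(t) * p[0] + math.cos(t) * p[1])
X = [(1.0, 0.0), (0.0, 2.0), (-1.5, 0.5)]
Y = [rot(true_angle, p) for p in X]      # rotated copies of X
theta = register_rotation(X, Y)
```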
In this thesis different algorithms for the solution of generalized Nash equilibrium problems with the focus on global convergence properties are developed. A globalized Newton method for the computation of normalized solutions, a nonsmooth algorithm based on an optimization reformulation of the game-theoretic problem, and a merit function approach and an interior point method for the solution of the concatenated Karush-Kuhn-Tucker-system are analyzed theoretically and numerically. The interior point method turns out to be one of the best existing methods for the solution of generalized Nash equilibrium problems.
In this thesis we consider a reactive transport model with precipitation-dissolution reactions from the geosciences. It consists of PDEs, ODEs, algebraic equations (AEs), and complementarity conditions (CCs). After discretization of this model we obtain a huge nonlinear and nonsmooth equation system. We tackle this system with the semismooth Newton method introduced by Qi and Sun. The focus of this thesis is on the application and convergence of this algorithm. We prove that this algorithm is well defined for this problem and even locally quadratically convergent at a BD-regular solution. We also deal with the arising linear equation systems, which are large and sparse, and with the question of how they can be solved efficiently. An integral part of this investigation is the boundedness of a certain matrix-valued function, which is shown in a separate chapter. As a side quest we study how extremal eigenvalues (and singular values) of certain PDE operators involved in our discretized model can be estimated accurately.
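The semismooth Newton idea can be illustrated on a scalar complementarity problem via the Fischer-Burmeister reformulation; the thesis treats a large discretized system, whereas the toy function F below is an illustrative stand-in.

```python
import math

def semismooth_newton(F, dF, x0, iters=30):
    """Semismooth Newton iteration on the Fischer-Burmeister
    reformulation  Phi(x) = sqrt(x^2 + F(x)^2) - x - F(x)  of the scalar
    complementarity problem  0 <= x  perp  F(x) >= 0."""
    x = x0
    for _ in range(iters):
        fx = F(x)
        r = math.hypot(x, fx)
        phi = r - x - fx
        if abs(phi) < 1e-14:
            break
        if r > 1e-12:                 # point of differentiability
            da, db = x / r - 1.0, fx / r - 1.0
        else:                         # an element of the generalized Jacobian
            da, db = -0.5, -0.5
        x -= phi / (da + db * dF(x))
    return x

F = lambda x: x * x + x - 2.0         # complementarity solution: x = 1
dF = lambda x: 2.0 * x + 1.0
sol = semismooth_newton(F, dF, 2.0)
```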
The subject of this thesis is mathematical programs with complementarity conditions (MPCC). First, an economic example of this problem class is analyzed: the problem of effort maximization in asymmetric n-person contest games. While an analytical solution could be derived for this special problem, this is not possible for general MPCCs. Therefore, optimality conditions which might be used for numerical approaches were considered next. More precisely, a Fritz-John result for MPCCs with stronger properties than those known so far was derived, together with some new constraint qualifications, and subsequently used to prove an exact penalty result. Finally, to solve MPCCs numerically, the so-called relaxation approach was used. Besides improving the results for existing relaxation methods, a new relaxation with strong convergence properties was suggested, and a numerical comparison of all methods based on the MacMPEC collection was conducted.
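The relaxation approach can be illustrated on a toy MPCC whose solution is known; the Scholtes-type relaxed problems below are solved by eliminating one variable and running a one-dimensional golden-section search (the example, search interval, and tolerances are illustrative assumptions).

```python
def golden_min(g, lo, hi, iters=200):
    """Golden-section search for a minimizer of a unimodal function."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    for _ in range(iters):
        if g(c) < g(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

def relaxed_mpcc(t):
    """Scholtes-type relaxation of the toy MPCC
        min (x-1)^2 + (y-1)^2   s.t.  x, y >= 0,  0 <= x perp y >= 0,
    where the complementarity is relaxed to  x*y <= t.  For t > 0 the
    relaxed constraint is active at the minimizer near (1, 0), so we
    eliminate y = t/x and minimize in x; the interval [0.5, 2] isolates
    this branch (a symmetric solution near (0, 1) also exists)."""
    x = golden_min(lambda x: (x - 1.0) ** 2 + (t / x - 1.0) ** 2, 0.5, 2.0)
    return x, t / x

# driving t to zero, the relaxed solutions approach the MPCC solution (1, 0)
sols = [relaxed_mpcc(t) for t in (1e-2, 1e-4, 1e-6)]
```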
In the following dissertation we consider three preconditioners of algebraic multigrid type. Although they are defined for arbitrary prolongation and restriction operators, we study them in more detail for the aggregation method. The strengthened Cauchy-Schwarz inequality and the resulting angle between the spaces are our main interests. In this context we introduce some modifications. For the problem of one-dimensional convection we obtain perfect theoretical results. Although this is not the case for more complex problems, the numerical results we present show that the modifications are also useful in these situations. Additionally, we consider a symmetric problem in the energy norm and present a simple rule for algebraic aggregation.
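As a hedged illustration of aggregation coarsening (the matrix and the pairwise grouping below are my own toy choices, not the thesis's), the following builds a piecewise-constant prolongation from aggregates of two neighbouring nodes and forms the Galerkin coarse operator for a one-dimensional upwind convection matrix. The coarse operator reproduces the bidiagonal convection structure on the coarse level, in line with the clean theory available for the 1D convection problem.

```python
import numpy as np

n = 8
# 1D convection discretized by first-order upwinding: A = I - (lower shift).
A = np.eye(n) - np.eye(n, k=-1)

# Aggregation: group nodes {0,1}, {2,3}, ...; each aggregate becomes one
# coarse unknown, and the prolongation P is piecewise constant.
P = np.zeros((n, n // 2))
for j in range(n // 2):
    P[2 * j, j] = 1.0
    P[2 * j + 1, j] = 1.0

# Galerkin coarse-level operator.
Ac = P.T @ A @ P
```

Here Ac again equals "identity minus lower shift" on the coarse grid, so the convection structure is preserved exactly under aggregation.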
This thesis is devoted to Bernoulli Stochastics, which was initiated by Jakob Bernoulli more than 300 years ago with his masterpiece 'Ars conjectandi', which can be translated as 'Science of Prediction'. Thus, Jakob Bernoulli's Stochastics focuses on prediction, in contrast to the later emerging disciplines of probability theory, statistics and mathematical statistics. Only recently was Jakob Bernoulli's focus taken up by von Collani, who developed a unified theory of uncertainty aiming at making reliable and accurate predictions. In this thesis, teaching material as well as a virtual classroom are developed for fostering the ideas and techniques initiated by Jakob Bernoulli and elaborated by Elart von Collani. The thesis is part of a broadly conceived project called 'Stochastikon', which aims at introducing Bernoulli Stochastics as a unified science of prediction and measurement under uncertainty. This ambitious aim shall be reached by the development of an internet-based comprehensive system offering the science of Bernoulli Stochastics on any level of application. So far it is planned that the 'Stochastikon' system (http://www.stochastikon.com/) will consist of five subsystems. Two of them are developed and introduced in this thesis. The first one is the e-learning programme 'Stochastikon Magister'; the second is 'Stochastikon Graphics', which provides the entire Stochastikon system with graphical illustrations. E-learning is the outcome of merging education and internet techniques. It is characterized by the fact that teaching and learning are independent of place and time and of the availability of specially trained teachers. Knowledge offering as well as knowledge transfer are realized by using modern information technologies. Nowadays more and more e-learning environments are based on the internet as the primary tool for communication and presentation.
E-learning presentation tools include, for instance, text files, pictures, graphics, audio and video, which can be networked with each other. Access to the teaching content is essentially unlimited, and students can adapt the speed of learning to their individual abilities. E-learning is particularly appropriate for newly arising scientific and technical disciplines, which generally cannot be presented sufficiently well by traditional teaching methods, because neither trained teachers nor textbooks are available. The first part of this dissertation introduces the state of the art of e-learning in statistics, since statistics and Bernoulli Stochastics are both based on probability theory and exhibit many similar features. Since Stochastikon Magister is the first e-learning programme for Bernoulli Stochastics, educational statistics systems are selected for the purpose of comparison and evaluation. This makes sense as both disciplines are an attempt to handle uncertainty and use methods that often can be directly compared. The second part of this dissertation is devoted to Bernoulli Stochastics. This part outlines the content of two courses, which have been developed for the anticipated e-learning programme Stochastikon Magister, in order to show the difficulties in teaching, understanding and applying Bernoulli Stochastics. The third part discusses the realization of the e-learning programme Stochastikon Magister, i.e. its design and implementation, which aims at offering a systematic way of learning the principles and techniques developed in Bernoulli Stochastics. The resulting e-learning programme differs from commonly developed e-learning programmes in that it attempts to provide a virtual classroom simulating all the functions of real classroom teaching. This is in general not necessary, since most e-learning programmes aim at supporting existing classroom teaching.
The fourth part presents two empirical evaluations of Stochastikon Magister. The evaluations are performed by means of comparisons between traditional classroom learning in statistics and e-learning of Bernoulli Stochastics. The aim is to assess the usability and learnability of Stochastikon Magister. Finally, the fifth part of this dissertation is added as an appendix. It refers to Stochastikon Graphics, the fifth component of the entire Stochastikon system, which provides the other components with graphical representations of concepts, procedures and results obtained or used in the framework of Bernoulli Stochastics. The primary aim of this thesis is the development of appropriate software for the anticipated e-learning environment for Bernoulli Stochastics, while the preparation of the necessary teaching material constitutes only a secondary aim, used for demonstrating the functionality of the e-learning platform and the scientific novelty of Bernoulli Stochastics. To this end, a first version of two teaching courses is developed, implemented and offered online in order to collect practical experience. The two courses, which were developed as part of this project, are submitted as a supplement to this dissertation. First experience with the e-learning programme Stochastikon Magister has already been gathered: students of different faculties of the University of Würzburg, as well as researchers and engineers involved in the Stochastikon project, have obtained access to Stochastikon Magister via the internet, registered, and participated in the course programme. This thesis reports on two assessments of these first experiences, and the results will lead to further improvements with respect to the content and organization of Stochastikon Magister.
Controllability Aspects of the Lindblad-Kossakowski Master Equation : A Lie-Theoretical Approach
(2009)
One main task, which is considerably important in many applications of quantum control, is to explore the possibilities of steering a quantum system from an initial state to a target state. This thesis focuses on fundamental control-theoretical issues of quantum dynamics described by the Lindblad-Kossakowski master equation, which arises as a bilinear control system on an underlying real vector space, e.g. controllability aspects and the structure of reachable sets. Based on Lie-algebraic methods from nonlinear control theory, the thesis presents a unified approach to control problems of finite-dimensional closed and open quantum systems. In particular, a simplified treatment of controllability of closed quantum systems as well as new accessibility results for open quantum systems are obtained. The main tools for deriving these results are the well-known classifications of all matrix Lie groups which act transitively on Grassmann manifolds and, respectively, on real vector spaces with the origin removed. It is also shown in this thesis that accessibility of the Lindblad-Kossakowski master equation is a generic property. Moreover, based on the theoretical accessibility results, an algorithm is developed to decide whether the Lindblad-Kossakowski master equation is accessible.
In the generalized Nash equilibrium problem, not only the cost function of a player but also his constraints depend on the rival players' decisions. This thesis presents different iterative methods for the numerical computation of a generalized Nash equilibrium, some of them globally, others locally superlinearly convergent. These methods are based either on reformulations of the generalized Nash equilibrium problem as an optimization problem, or on a fixed point formulation. The key tool for these reformulations is the Nikaido-Isoda function. Numerical results for various problems from the literature are given.
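To make the key tool concrete: the sketch below (a toy two-player quadratic game of my own choosing, not one of the thesis's methods or test problems) evaluates the Nikaido-Isoda function and runs the simplest fixed point iteration, repeated best responses, which for this contractive game converges to the Nash equilibrium, where the Nikaido-Isoda gain from unilateral deviation is zero.

```python
import numpy as np

# Hypothetical two-player game: player i controls x_i and minimizes
# theta_i(x) = (x_i - 1)^2 + 0.5 * x_i * x_j  (an illustrative choice).
def theta(i, x):
    j = 1 - i
    return (x[i] - 1.0) ** 2 + 0.5 * x[i] * x[j]

def nikaido_isoda(x, y):
    # Psi(x, y) = sum_i [ theta_i(x) - theta_i(y_i, x_{-i}) ]:
    # the total gain if each player unilaterally deviates from x_i to y_i.
    total = 0.0
    for i in range(2):
        deviated = x.copy()
        deviated[i] = y[i]
        total += theta(i, x) - theta(i, deviated)
    return total

def best_response(x):
    # For this quadratic game the best response is analytic:
    # d/dx_i theta_i = 2 (x_i - 1) + 0.5 x_j = 0  =>  x_i = 1 - x_j / 4.
    return np.array([1.0 - x[1] / 4.0, 1.0 - x[0] / 4.0])

# Fixed point iteration: x^{k+1} maximizes Psi(x^k, .) (= best responses here).
x = np.zeros(2)
for _ in range(60):
    x = best_response(x)
# The unique equilibrium of this game is x = (0.8, 0.8), where
# max_y Psi(x, y) = 0: no player can gain by deviating.
```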
It is well known that a multivariate extreme value distribution can be represented via a D-norm. However, not every norm is a D-norm. In this thesis a necessary and sufficient condition is given for a norm to define an extreme value distribution. Applications of this theorem include a new proof for the bivariate case, the Pickands dependence function and the nested logistic model. Furthermore, the GPD flow is introduced and first insights are given; in particular, if it converges, then it converges to the copula of complete dependence.
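For concreteness (a standard textbook example, not the thesis's construction): the logistic family yields a D-norm for every parameter lambda >= 1, and the snippet below checks two defining properties numerically, namely that unit vectors have D-norm 1 and that the induced distribution function with standard negative exponential margins is max-stable.

```python
import numpy as np

def logistic_dnorm(x, lam):
    # Logistic D-norm: ||x||_lambda = (sum_i |x_i|^lambda)^(1/lambda), lambda >= 1.
    # lambda = 1 corresponds to independence, lambda -> infinity to complete dependence.
    return float(np.sum(np.abs(x) ** lam) ** (1.0 / lam))

def evd_cdf(x, lam):
    # Multivariate EVD with standard negative exponential margins:
    # G(x) = exp(-||x||_lambda) for x <= 0 componentwise.
    return float(np.exp(-logistic_dnorm(x, lam)))

x = np.array([-0.3, -1.2])
lam = 2.5
n = 7
# Max-stability, G(x / n)^n = G(x), follows from the homogeneity of the norm.
```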
A new class of optimization problems, named 'mathematical programs with vanishing constraints (MPVCs)', is considered. MPVCs are, on the one hand, very challenging from a theoretical viewpoint, since standard constraint qualifications such as LICQ, MFCQ, or ACQ are most often violated, and hence the Karush-Kuhn-Tucker conditions do not provide necessary optimality conditions off-hand. Thus, new constraint qualifications and the corresponding optimality conditions are investigated. On the other hand, MPVCs have important applications, e.g., in the field of topology optimization. Therefore, numerical algorithms for the solution of MPVCs are designed, investigated and tested on certain problems from truss topology optimization.
It is well known that the least squares estimator performs poorly in the presence of multicollinearity. One way to overcome this problem is to use biased estimators, e.g. ridge regression estimators. In this study an estimation procedure is proposed that is based on adding a small quantity omega to some or all of the regressors. The resulting biased estimator is described as a function of omega, and it is shown that its mean squared error is smaller than that of the least squares estimator in the case of highly correlated regressors.
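The thesis's specific regressor-perturbation scheme is not reproduced here. As a hedged stand-in illustrating why biased estimation pays off under multicollinearity, the following Monte Carlo experiment (all numbers and the design are illustrative choices of mine) compares ordinary least squares with the classical ridge estimator the abstract mentions, on a design with two nearly identical regressors.

```python
import numpy as np

rng = np.random.default_rng(42)

def monte_carlo_mse(omega, n_rep=300):
    # Monte Carlo estimate of E||b - beta||^2 for the ridge estimator
    # b = (X'X + omega I)^{-1} X'y; omega = 0 gives ordinary least squares.
    n, beta = 50, np.array([1.0, 1.0])
    total = 0.0
    for _ in range(n_rep):
        z = rng.standard_normal(n)
        # Two almost perfectly correlated regressors.
        X = np.column_stack([z, z + 0.01 * rng.standard_normal(n)])
        y = X @ beta + rng.standard_normal(n)
        b = np.linalg.solve(X.T @ X + omega * np.eye(2), X.T @ y)
        total += np.sum((b - beta) ** 2)
    return total / n_rep

mse_ols = monte_carlo_mse(0.0)
mse_ridge = monte_carlo_mse(0.5)
```

Under this near-collinear design the OLS mean squared error blows up (the smallest eigenvalue of X'X is close to zero), while the small bias introduced by omega buys a large variance reduction.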
We discuss exceptional polynomials, i.e. polynomials over a finite field $k$ that induce bijections over infinitely many finite extensions of $k$. In the first chapters we give the theoretical background to characterize this class of polynomials by Galois-theoretic means. This leads to the notions of the arithmetic and the geometric monodromy group. In the remaining chapters we restrict our attention to polynomials with primitive affine arithmetic monodromy group. We first classify all exceptional polynomials for which the fixed field of the affine kernel of the arithmetic monodromy group has genus at most 2. Next we show that every full affine group can be realized as the monodromy group of a polynomial. In the remaining chapters we classify affine polynomials of a given degree.
We investigate iterative numerical algorithms with shifts as nonlinear discrete-time control systems. Our approach is based on the interpretation of reachable sets as orbits of the system semigroup. In the first part we develop tools for the systematic analysis of the structure of reachable sets of general invertible discrete-time control systems. To this end, we merge classical concepts such as geometric control theory, semigroup actions and semialgebraic geometry. Moreover, we introduce new concepts such as right divisible systems and the repelling phenomenon. In the second part we apply the semigroup approach to the investigation of concrete numerical iteration schemes. We extend the known results about the reachable sets of classical inverse iteration. Moreover, we investigate the structure of reachable sets and system group orbits of inverse iteration on flag manifolds and Hessenberg varieties, rational iteration schemes, Richardson's method and linear control schemes. In particular, we obtain necessary and sufficient conditions for controllability and for the appearance of repelling phenomena. Furthermore, a new algorithm for solving linear equations (LQRES) is derived.
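A minimal numerical sketch of the control-theoretic viewpoint (my own toy setup, not one of the thesis's algorithms): in inverse iteration with shifts, the state lives on the unit sphere, the shift mu_k plays the role of the control input, and choosing the Rayleigh quotient as a feedback law steers the state into an eigenvector.

```python
import numpy as np

def shifted_inverse_step(A, x, mu):
    # One step of the control system: x_{k+1} = (A - mu_k I)^{-1} x_k, normalized.
    y = np.linalg.solve(A - mu * np.eye(A.shape[0]), x)
    return y / np.linalg.norm(y)

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2.0           # symmetric, so the eigenvalues are real
x = rng.standard_normal(4)
x /= np.linalg.norm(x)

for _ in range(50):
    mu = x @ A @ x            # Rayleigh quotient as feedback shift
    if np.linalg.norm(A @ x - mu * x) < 1e-12:
        break                 # state has (numerically) reached an eigenvector
    x = shifted_inverse_step(A, x, mu)

residual = np.linalg.norm(A @ x - (x @ A @ x) * x)
```

With constant shifts one recovers classical inverse iteration; the feedback shift illustrates how different control choices produce different reachable behaviour of the same system.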
The incidence matrices of many combinatorial structures satisfy the so-called rectangular rule, i.e., the scalar product of any two rows of the matrix is at most 1. We study a class of matrices satisfying the rectangular rule, the regular block matrices. Some regular block matrices are submatrices of incidence matrices of finite projective planes. Necessary and sufficient conditions are given for regular block matrices to be submatrices of incidence matrices of projective planes. Moreover, regular block matrices are related to another combinatorial structure, the symmetric configurations. In particular, it turns out that, using this relationship, the existence of several symmetric configurations can be deduced from the existence of a projective plane.
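As a concrete, hedged illustration (the Fano plane is a standard example, not taken from the thesis): the incidence matrix of the projective plane of order 2 satisfies the rectangular rule, because two distinct lines of a projective plane meet in exactly one point.

```python
import numpy as np

# Lines of the Fano plane (7 points, 7 lines; the projective plane of order 2).
lines = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]
N = np.zeros((7, 7), dtype=int)
for i, line in enumerate(lines):
    N[i, list(line)] = 1  # row i is the incidence vector of line i

def satisfies_rectangular_rule(M):
    # Rectangular rule: the scalar product of any two distinct rows is <= 1,
    # i.e. no two rows share more than one common 1-entry.
    G = M @ M.T
    off_diag = G - np.diag(np.diag(G))
    return bool(np.all(off_diag <= 1))
```

For the Fano plane the rule even holds with equality off the diagonal: N N^T = 2 I + J, since every line has 3 points and any two lines share exactly one.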
Many optimization problems for a smooth cost function f on a manifold M can be solved by determining the zeros of a vector field F, such as the gradient of the cost function f. If F does not depend on additional parameters, numerous zero-finding techniques are available for this purpose. It is a natural generalization, however, to consider time-dependent optimization problems that require the computation of time-varying zeros of time-dependent vector fields F(x,t). Such parametric optimization problems arise in many fields of applied mathematics, in particular path-following problems in robotics, recursive eigenvalue and singular value estimation in signal processing, as well as numerical linear algebra and inverse eigenvalue problems in control theory. In the literature, there are already some tracking algorithms for these tasks, but they do not always adequately respect the manifold structure. Hence, available tracking results can often be improved by implementing methods that work directly on the manifold. Thus, intrinsic methods, which evolve on the manifold during the entire computation, are of interest. It is the task of this thesis to develop such intrinsic zero-finding methods. The main results of this thesis are as follows: - A new class of continuous and discrete tracking algorithms is proposed for computing zeros of time-varying vector fields on Riemannian manifolds. This is achieved by studying the newly introduced time-varying Newton flow and the time-varying Newton algorithm on Riemannian manifolds. - Convergence analysis is performed on arbitrary Riemannian manifolds. - These results are concretized on submanifolds, including a new class of algorithms based on local parameterizations. - More specific results in Euclidean space are obtained by considering inexact and underdetermined time-varying Newton flows.
- These newly introduced algorithms are illustrated on time-varying tracking tasks in three application areas: subspace analysis, matrix decompositions (in particular EVD and SVD) and computer vision.
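In the Euclidean special case, the flavour of such tracking algorithms can be sketched as follows. This is an illustrative prediction-correction discretization of a time-varying Newton flow; the scalar test function, step size, and monitoring code are my own choices, not the thesis's scheme.

```python
import numpy as np

# Track the time-varying zero x*(t) of F(x, t) = x^3 + x - (2 + sin t),
# which is unique because x -> x^3 + x is strictly increasing.
def F(x, t):    return x**3 + x - (2.0 + np.sin(t))
def dFdx(x, t): return 3.0 * x**2 + 1.0
def dFdt(x, t): return -np.cos(t)

h = 0.01      # time step
x = 1.0       # x*(0) = 1, since 1 + 1 - 2 = 0
errors = []
for k in range(600):
    t = k * h
    # Prediction-correction update: a Newton correction of the current error
    # plus an Euler predictor following the drift of the zero,
    #   x_{k+1} = x_k - dFdx^{-1} (F(x_k, t_k) + h * dFdt(x_k, t_k)).
    x = x - (F(x, t) + h * dFdt(x, t)) / dFdx(x, t)
    # Exact zero at t + h for monitoring: the unique real root of the cubic.
    roots = np.roots([1.0, 0.0, 1.0, -(2.0 + np.sin(t + h))])
    exact = roots[np.abs(roots.imag) < 1e-9].real[0]
    errors.append(abs(x - exact))
max_err = max(errors)
```

The iterate stays within a small, step-size-dependent distance of the moving zero rather than converging to a single point, which is the defining feature of tracking methods.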
A torsion-free abelian group of finite rank is called almost completely decomposable if it has a completely decomposable subgroup of finite index. A p-local, p-reduced almost completely decomposable group of type (1,2) is briefly called a (1,2)-group. Almost completely decomposable groups can be represented by matrices over the ring Z/hZ, where h is the exponent of the regulator quotient. This particular choice of representation allows for a better investigation of the decomposability of the group. Arnold and Dugas showed in several of their works that (1,2)-groups with regulator quotient of exponent at least p^7 admit infinitely many isomorphism types of indecomposable groups. It is not known whether the exponent 7 is minimal. This dissertation addresses this problem.
This work studies the convergence of trajectories of gradient-like systems. In the first part, continuous-time gradient-like systems are examined. Results of Lojasiewicz and Kurdyka on the convergence of integral curves of gradient systems to single points are extended to a class of gradient-like vector fields and gradient-like differential inclusions. In the second part, discrete-time gradient-like optimization methods on manifolds are studied. Methods for smooth and for nonsmooth optimization problems are considered, and for these methods some convergence results are proven. Additionally, the optimization methods for nonsmooth cost functions are applied to sphere packing problems on adjoint orbits.