Refine
Has Fulltext
- yes (230)
Is part of the Bibliography
- yes (230)
Document Type
- Doctoral Thesis (131)
- Journal article (77)
- Book (5)
- Other (4)
- Report (4)
- Master Thesis (3)
- Conference Proceeding (2)
- Preprint (2)
- Book article / Book chapter (1)
- Review (1)
Keywords
- Optimale Kontrolle (12)
- Optimierung (9)
- Extremwertstatistik (8)
- optimal control (8)
- Nash-Gleichgewicht (7)
- Newton-Verfahren (7)
- Mathematik (6)
- Nichtlineare Optimierung (6)
- Finite-Elemente-Methode (5)
- Mathematikunterricht (5)
- Stabilität (5)
- Differentialgleichung (4)
- Elliptische Differentialgleichung (4)
- Extremwerttheorie (4)
- MPEC (4)
- Nichtglatte Optimierung (4)
- Partielle Differentialgleichung (4)
- Pontryagin maximum principle (4)
- SAS <Programm> (4)
- SQH method (4)
- Zeitreihenanalyse (4)
- A-priori-Wissen (3)
- Copula (3)
- Differentialgeometrie (3)
- Eigenwert (3)
- Euler equations (3)
- Finite-Volumen-Methode (3)
- Fokker-Planck-Gleichung (3)
- Galerkin-Methode (3)
- Gaussian approximation (3)
- Kombinatorik (3)
- Konfidenzintervall (3)
- Ljapunov-Funktion (3)
- Magnetoelastizität (3)
- Monodromie (3)
- Nichtkonvexe Optimierung (3)
- Numerisches Verfahren (3)
- Pareto-Verteilung (3)
- Spieltheorie (3)
- Statistik (3)
- Variationsrechnung (3)
- Zetafunktion (3)
- count time series (3)
- extreme value theory (3)
- global convergence (3)
- mathematical modelling (3)
- multi-fluid mixture (3)
- nonsmooth optimization (3)
- proximal gradient method (3)
- Abelsche Gruppe (2)
- Analysis (2)
- Anaphylaxis (2)
- Angewandte Mathematik (2)
- Anwendungssoftware (2)
- Argumentation (2)
- Audit sampling (2)
- BGK approximation (2)
- Belyi map (2)
- Biholomorphe Abbildung (2)
- Binomialverteilung (2)
- Box-Jenkins-Verfahren (2)
- Box–Jenkins Program (2)
- Cayley graph (2)
- Code examples (2)
- Controllability (2)
- D-Norm (2)
- Darstellungsmatrix (2)
- Deformationsquantisierung (2)
- Dirichlet-Reihe (2)
- Extreme Value Theory (2)
- Finanzmathematik (2)
- Fokker–Planck equation (2)
- Frequency Domain (2)
- Funktionentheorie (2)
- Galois-Theorie (2)
- Gamma-Konvergenz (2)
- Generalized Nash Equilibrium Problem (2)
- Gleichverteilung (2)
- Grundvorstellung (2)
- HIV (2)
- Homogener Raum (2)
- Hurwitz-Raum (2)
- Innere-Punkte-Methode (2)
- Inzidenzmatrix (2)
- Kohomologie (2)
- Kompakte Lie-Gruppe (2)
- Komplementaritätsproblem (2)
- Kontrolltheorie (2)
- Konvergenz (2)
- Kopula <Mathematik> (2)
- Lie groups (2)
- Lineare Algebra (2)
- Ljapunov-Stabilitätstheorie (2)
- Loewner theory (2)
- Loewner-Theorie (2)
- Lyapunov functions (2)
- Magnetohydrodynamik (2)
- Mathematica <Programm> (2)
- Mathematische Modellierung (2)
- Mehrgitterverfahren (2)
- Mikromagnetismus (2)
- Newton Methods (2)
- Newton's method (2)
- Nichtlineare Kontrolltheorie (2)
- Nikaido-Isoda function (2)
- Numerical analysis (2)
- Numerische Mathematik (2)
- Numerische Strömungssimulation (2)
- Optimale Steuerung (2)
- Origami (2)
- PDE (2)
- Prediction interval (2)
- Prior information (2)
- Projektive Ebene (2)
- Regressionsanalyse (2)
- Regulator (2)
- Regulator <Mathematik> (2)
- Riemann zeta-function (2)
- Riemannsche Zetafunktion (2)
- SAS (2)
- Schlichte Funktion (2)
- Simulation (2)
- Stability (2)
- State-Space Models (2)
- Stochastischer Prozess (2)
- Systemisches Risiko (2)
- Testinstrument (2)
- Time Series Analysis (2)
- Time series analyses (2)
- Torsionsfreie Abelsche Gruppe (2)
- Verallgemeinertes Nash-Gleichgewichtsproblem (2)
- Weibull-Verteilung (2)
- Zahlentheorie (2)
- Zustandsraummodelle (2)
- augmented Lagrangian method (2)
- basic mental models (2)
- cardinality constraints (2)
- conservation laws (2)
- deformation quantization (2)
- estimation error (2)
- extreme value distribution (2)
- fast vollständig zerlegbare Gruppe (2)
- finite elements (2)
- generalized Pareto distribution (2)
- globale Konvergenz (2)
- incidence matrix (2)
- jump-diffusion processes (2)
- kinetic model (2)
- low Mach number (2)
- mathematics (2)
- mean curvature (2)
- multigrid (2)
- nonlinear systems (2)
- numerical analysis (2)
- optimal control theory (2)
- optimization (2)
- ordinary differential equations (2)
- pedagogical content knowledge (2)
- prime number (2)
- regulator (2)
- relaxation method (2)
- representing matrix (2)
- semismooth Newton method (2)
- stability (2)
- stability analysis (2)
- state constraints (2)
- stochastic processes (2)
- trabeculectomy (2)
- viral load (2)
- Überlagerung <Mathematik> (2)
- (0, 1)-Matrix (1)
- (0, 1)-matrix (1)
- (approximate) functional equation (1)
- *-algebra (1)
- 1-fach-Origami (1)
- 5-fluorouracil (1)
- ADMM (1)
- AIDS (1)
- Abbildungseigenschaften (1)
- Abelsche p-Gruppe (1)
- Abhängigkeitsmaß (1)
- Ableitung (1)
- Abstiegsverfahren (1)
- Abstoßungsphänomen (1)
- Acoustic equations (1)
- Adolf Hurwitz (1)
- Affine Skalierungsverfahren (1)
- Affinminimalfläche (1)
- Affinnormal (1)
- Aggregation (1)
- Algebraic Curves (1)
- Algebraic signal processing (1)
- Algebraische Kurve (1)
- Algebraische Signalverarbeitung (1)
- Algebraische Zahlentheorie (1)
- Algebraischer Funktionenkörper (1)
- Analogie (1)
- Analogiebildung (1)
- Analytische Funktion (1)
- Analytische Zahlentheorie (1)
- Angewandte Geowissenschaften (1)
- Angular Density (1)
- Anpassungstest (1)
- Approximation (1)
- Approximationstheorie (1)
- Arbitrary Lagrangian-Eulerian (1)
- Archimedean copula (1)
- Aspekte professioneller Kompetenz (1)
- Asymptotic Preserving (1)
- Asymptotic independence (1)
- Atmosphäre (1)
- Augmented Lagrangian (1)
- Augmented Lagrangian methods (1)
- Automorphismengruppe (1)
- Axiom (1)
- Axiomatisieren (1)
- B-Spline (1)
- Babuska Brezzi Bedingung (1)
- Babuska Brezzi condition (1)
- Banach-Raum (1)
- Bayesian approach (1)
- Bayesian inverse problems (1)
- Beatty sequence (1)
- Bedingte Unabhängigkeit (1)
- Belyi-Funktionen (1)
- Beobachter (1)
- Berechnung (1)
- Bernoulli (1)
- Bernoulli Raum (1)
- Bernoulli Space (1)
- Bernstein (2)
- Bernstein-type inequality (1)
- Beweistheorie (1)
- Bi-Fidelity method (1)
- Bildrekonstruktion (1)
- Bilinear differential games (1)
- Black Scholes equation (1)
- Bilinear Quantum Control Systems (1)
- Bloch's Principle (1)
- Blochsches Prinzip (1)
- Blockplan (1)
- Bloom setting (1)
- Box-Restriktionen (1)
- Bregman distance (1)
- Brittle fracture (1)
- Brockett (1)
- Brüder Hurwitz (1)
- Burgers-Gleichung (1)
- Butler group (1)
- Butlergruppe (1)
- CSF (1)
- CSIDH (1)
- Calculus of Variations (1)
- Calculus of variations (1)
- Caputo fractional derivative (1)
- Carbon dioxide capture (1)
- Cardinality Constraints (1)
- Carleson embedding theorem (1)
- Cartan's Theorem (1)
- Cauchy-Born rule (1)
- Central limit theorem under dependence (1)
- Charakteranalyse (1)
- Code Examples (1)
- Coisotropic reduction (1)
- Complex Continued Fractions (1)
- Complex Fluids (1)
- Composite optimization problems (1)
- Compressed Sensing (1)
- Computerunterstützter Unterricht (1)
- Confidence interval (1)
- Confidence intervals (1)
- Conformal Metrics (1)
- Conjugate function (1)
- Conjugate gradient method (1)
- Conservation Laws (1)
- Constrained optimization (1)
- Constraint-Programmierung (1)
- Continuous Sample Path (1)
- Convergence (1)
- Copula <Mathematik> (1)
- Counterparty Risk (1)
- Credibility interval (1)
- Curvature Equation (1)
- D-Norms (1)
- D-norm (1)
- DAT (1)
- DC optimization (1)
- DNA replication (1)
- Darstellung von Pseudo-Metriken (1)
- Data Exploration (1)
- Deformation (1)
- Deformationsgradient (1)
- Derivation (1)
- Deskriptive Statistik (1)
- Dichtefunktionalformalismus (1)
- Differential Games (1)
- Differentialgleichungssystem (1)
- Digitale Signalverarbeitung (1)
- Dimension reduction (1)
- Diophantine approximation (1)
- Dirichlet-Problem (1)
- Discontinuous Galerkin method (1)
- Discrete to continuum (1)
- Discrete-to-continuum limits (1)
- Diskrepanz (1)
- Double sensitization (1)
- Drug allergy (1)
- Drug reaction (1)
- Dual gap function (1)
- Dualität (1)
- Dualitätstheorie (1)
- Dynamic Geometry Environment (1)
- Dynamic representations (1)
- Dynamical Systems (1)
- Dynamical system (1)
- Dynamische Geometriesysteme (1)
- Dynamische Optimierung (1)
- Dynamische Repräsentation (1)
- Dynamisches System (1)
- E-Learning (1)
- Eigenmode (1)
- Elastizität (1)
- Elliptic equations (1)
- Elliptische Kurve (1)
- Endliche Geometrie (1)
- Ensemble optimal control (1)
- Entropiebedingung (1)
- Entropielösung (1)
- Entropy admissibility condition (1)
- Epidemiologie (1)
- Epstein zeta-function (1)
- Epstein, Paul (1)
- Erhaltungsgleichungen (1)
- Estimation (1)
- Euler system (1)
- Eulersche Differentialgleichung (1)
- Euler–Bernoulli damped beam (1)
- Exact-controllability (1)
- Exceedance Stability (1)
- Existenz schwacher Lösungen (1)
- Existenz und Eindeutigkeit (1)
- Explicit Computation (1)
- Explizite Berechnung (1)
- Exponential smoothing (1)
- Exponential smoothing with covariates (1)
- Extremal–I–Verteilung (1)
- Extreme value copula (1)
- Extremwert (1)
- Extremwertregelung (1)
- Extremwertverteilung (1)
- Falten (1)
- Faltung (1)
- Faltungscode (1)
- Fast vollständig zerlegbare Gruppe (1)
- Fermentation (1)
- Festkörper (1)
- Field sting (1)
- Filter-SQPEC Verfahren (1)
- Financial Networks (1)
- Finanzielle Netzwerke (1)
- Finite Elemente (1)
- Finite Elemente Methode (1)
- Finite support distributions (1)
- Firmwert (1)
- Fixpunktsatz (1)
- Flachfaltbarkeit (1)
- Flat-foldability (1)
- Fluid (1)
- Fluid-Partikel-Strömung (1)
- Fluid-Struktur-Wechselwirkung (1)
- Fluid-structure interaction (1)
- Fluidnetzwerk (1)
- Flüssigkeit (1)
- Flüssigkristall (1)
- Fokalmannigfaltigkeit (1)
- Fokker-Planck (1)
- Fokker-Planck optimality systems (1)
- Formoptimierung (1)
- Fragility Index (1)
- Fragilitätsindex (1)
- Freies Randwertproblem (1)
- Fréchet spaces (1)
- Fréchet-Raum (1)
- Frécheträume (1)
- Function Fields (1)
- Functional differential equations (1)
- Functions with Primitive (1)
- Funktion von beschränkter Variation (1)
- Funktionen mit Stammfunktion (1)
- Funktionenkörper (1)
- GPD (1)
- GPD-Flow (1)
- Galois theory (1)
- Galois-Erweiterung (1)
- Galois-Feld (1)
- Galoistheorie (1)
- Gamma-convergence (1)
- Gasgemisch (1)
- Gebäude (1)
- Generalized Nash Equilibrium (1)
- Generalized Nash equilibrium (1)
- Generalized Pareto Distribution (1)
- Generalized Pareto copula (1)
- GeoGebra (1)
- Geometric constraints (1)
- Geometrie (1)
- Geschaltete Systeme (1)
- Gesetz der kleinen Zahlen (1)
- Gestaltoptimierung (1)
- Gewöhnliche Differentialgleichung (1)
- Gewöhnliche Differentialgleichungen (1)
- Gleichmäßige Konvergenz (1)
- Gleichungssysteme (1)
- Globale Analysis (1)
- Glättungsverfahren (1)
- Goodness-of-Fit Test (1)
- Gram points (1)
- Gram’s law (1)
- Graph (1)
- Graph eigenvalues (1)
- Graph products (1)
- Graph spectrum (1)
- Graphnullity (1)
- Grassmann Manifold (1)
- Grassmann-Mannigfaltigkeit (1)
- Gravitationsfeld (1)
- Guignard CQ (1)
- Gumbel-Verteilung (1)
- HAND (1)
- HDL (1)
- HIV infections (1)
- Hamilton Systeme (1)
- Hamilton Systems (1)
- Hamilton-Jacobi-Differentialgleichung (1)
- Hautus test (1)
- Hecke L-functions (1)
- Hecke eigenforms (1)
- Hierarchische Matrix (1)
- High-frequency data (1)
- Hilbert-Raum (1)
- Hilfe-System (1)
- Hochschuldidaktik (1)
- Hochschule+Lehre (1)
- Homicidal Chauffeur game (1)
- Homogenisierung <Mathematik> (1)
- Homologische Algebra (1)
- Honey bee (1)
- Hurwitz spaces (1)
- Hurwitz zeta function (1)
- Hybrid Dynamical Systems (1)
- Hybridsystem (1)
- Hydrodynamische Grenzwerte (1)
- Hymenoptera venom (1)
- Hyperbolic Partial Differential Equations (1)
- Hyperbolische Differentialgleichung (1)
- Hypertranscendence (1)
- Hüllenbildung (1)
- IMEX scheme (1)
- Ignorance (1)
- Ignoranz (1)
- Image Registration (1)
- Immunotherapy (1)
- Incompressibility (1)
- Infinite Optimierung (1)
- Inkompressibilität (1)
- Innere-Punkte-Verfahren (1)
- Integral graph (1)
- Integralgleichung (1)
- Integrodifferentialgleichung (1)
- Interactive Help System (1)
- Interconnection (1)
- Invarianter Unterraum (1)
- Inverse Iteration (1)
- Isomorphie (1)
- Isomorphieklasse (1)
- Isoparametrische Hyperfläche (1)
- Jacobi-Eigenwert-Verfahren (1)
- Jacobi-type eigenvalue methods (1)
- Jacobi-ähnliches Verfahren (1)
- Jacobsthal function (1)
- Jakob <Mathematiker, 1655-1705> (1)
- Julia line (1)
- Julius Hurwitz (1)
- Kanzow, C. Y. Cui, J.-S. Pang: “Modern Nonconvex Nondifferentiable Optimization” (1)
- Kapitalverflechtung (1)
- Kapitalverflechtungen (1)
- Karush-Kuhn-Tucker-Bedingungen (1)
- Kegelgebiet (1)
- Keller–Segel model (1)
- Kettenbruch (1)
- Klassifikation (1)
- Kloosterman sum (1)
- Koenigs function (1)
- Kollinearität (1)
- Kombinatorische Optimierung (1)
- Kombinatorische Zahlentheorie (1)
- Komplexe Flüssigkeiten (1)
- Komprimierte Abtastung (1)
- Kondition <Mathematik> (1)
- Konforme Abbildungen (1)
- Konforme Metrik (1)
- Konjugierte-Gradienten-Methode (1)
- Konstruktionsmethoden (1)
- Kontinuitätsgleichung (1)
- Kontinuumsmechanik (1)
- Kontrollsystem (1)
- Konvergenz bei quadratischem Eigenwertproblem (1)
- Konvexe Analysis (1)
- Korrekt gestelltes Problem (1)
- Korrelation (1)
- Krylow matrix (1)
- Kryptologie (1)
- Kurdyka–Łojasiewicz property (1)
- LPEC (1)
- Ladyzhenskaya Konstante (1)
- Ladyzhenskaya constant (1)
- Lagrange-Methode (1)
- Landau type theorem (1)
- Langschrittmethoden (1)
- Laplace approximation (1)
- Laser (1)
- Lasersimulation (1)
- Least Action Principle (1)
- Least squares estimation (1)
- Lehrerbildung (1)
- Lennard-Jones-Potenzial (1)
- Lerch zeta function (1)
- Lie n-algebroids (1)
- Lie-Gruppe (1)
- Liegruppen (1)
- Lindblad-Kossakowski Master Equation (1)
- Lineare Funktionalanalysis (1)
- Lineare Regression (1)
- Linearer Operator (1)
- Lineares System (1)
- Liouville and transport equations (1)
- Local Lipschitz continuity (1)
- Local rings (1)
- Lotka-Volterra models (1)
- Lyapunov Funktion (1)
- Lyapunov Stability (1)
- Lückenreihe (1)
- Lückenreihen (1)
- M-Stationär (1)
- M-stationarity (1)
- MHD (1)
- MIND estimator (1)
- MPCC (1)
- MPVC (1)
- MSC 11M35 (1)
- Machzahl (1)
- Magnetic Resonance Imaging (1)
- Magnetoelasticity (1)
- Magnetohydrodynamics (1)
- Magnetoviscoelastic Fluids (1)
- Magnetoviskoelastische Flüsse (1)
- Mapping Properties (1)
- Martensit (1)
- Master-Gleichung (1)
- Mastocytosis (1)
- Mathematical modeling (1)
- Mathematikdidaktik (1)
- Mathematiklernen (1)
- Mathematisches Modell (1)
- Matrix (1)
- Matrixpolynom (1)
- Matrizenpolynom (1)
- Matrizenzerlegung (1)
- Maximal (1)
- Maximale (1)
- Maximum Dissipation Principle (1)
- Maße für Quantenverschränkung (1)
- Medical image reconstruction (1)
- Mehragentensystem (1)
- Mehrdimensionale Signalverarbeitung (1)
- Mehrgitter (1)
- Mehrskalenmodell (1)
- Metakognition (1)
- Metrologie (1)
- Minimal surfaces (1)
- Minimalfläche (1)
- Minimalflächen (1)
- Minimalkurven (1)
- Minimizing movements (1)
- Minimum Information Probability Distribution (1)
- Minimum Information Wahrscheinlichkeitsverteilung (1)
- Mittlere Krümmung (1)
- Mobiler Roboter (1)
- Modeling (1)
- Modellierung (1)
- Moment <Stochastik> (1)
- Monodromy (1)
- Monte Carlo Simulation (1)
- Moving mesh method (1)
- Multi-agent systems (1)
- Multi-dimensional SPDEs (1)
- Multiple Repräsentationen (1)
- Multiple representations (1)
- Multivariate Generalized Pareto Distributions (1)
- Multivariate order statistics (1)
- Multivariate statistics (1)
- Multivariate verallgemeinerte Pareto-Verteilungen (1)
- NCP-Funktionen (1)
- NCP-functions (1)
- Nash Equilibrium Problem (1)
- Nash bargaining problem (1)
- Nash equilibria (1)
- Nash equilibrium (1)
- Navier-Stokes equations (1)
- Navier-Stokes-Gleichung (1)
- Near-Isomorphie (1)
- Nematic Liquid Crystals (1)
- Nematische Flüssigkristalle (1)
- Neue Medien (1)
- Newton method (1)
- Newton methods (1)
- Newton-Raphson Method (1)
- Newton-Raphson Verfahren (1)
- Newtonsches Kräftegleichgewicht (1)
- Newtonverfahren (1)
- Nichtglatte Analysis (1)
- Nichtlineare Funktionalgleichung (1)
- Nichtlinearer Operator (1)
- Nichtlineares System (1)
- Nichtparametrische Statistik (1)
- Nikaido-Isoda Funktion (1)
- Niveaustufen des Beweises (1)
- Non-smooth optimal control (1)
- Non-steroidal anti-inflammatory drug (1)
- Nonlinear systems (1)
- Nonparametric Inference (1)
- Nonsmooth optimization (1)
- Nullstelle (1)
- Numerical Asset Valuation (1)
- Numerical Methods (1)
- Numerik (1)
- One-dimensional SPDEs (1)
- Operatortheorie (1)
- Optimal Control (1)
- Optimal control problem (1)
- Optimalitätsbedingung (1)
- Optimierung / Nebenbedingung (1)
- Optimierung auf Mannigfaltigkeiten (1)
- Optimierungsproblem (1)
- Optimization on Lie Groups (1)
- Order Statistics (1)
- Overstatement models (1)
- PDEs (1)
- Paper-folding (1)
- Papierfalten (1)
- Parabolic equations (1)
- Parabolische Differentialgleichung (1)
- Parametric inference (1)
- Parametric optimization (1)
- Parametrische Optimierung (1)
- Parametrisierung (1)
- Partielle Differentialgleichungen (1)
- Peaks over Threshold (1)
- Penalized Least Squares Method (1)
- Penalized Least Squares Methode (1)
- Periodic homogenization (1)
- Phasenumwandlung (1)
- Piecewise Polynomial Function (1)
- Plasma (1)
- Poisson Gleichung (1)
- Poisson algebras (1)
- Poisson equation (1)
- Poisson-Gleichung (1)
- Poisson-Prozess (1)
- Polyatomare Verbindungen (1)
- Polynomial matrices (1)
- Pontrjagin-Maximumprinzip (1)
- Pontryagin Maximum Principle (1)
- Pontryagin Maximum Prinzip (1)
- Pontryagin's maximum principle (1)
- Post-Quantum-Kryptografie (1)
- Prediction Procedure (1)
- Primzahl (1)
- Probability theory (1)
- Problemlösen (1)
- Prognose (1)
- Projection Theorem (1)
- Projektionssatz (1)
- Proof (1)
- Proving Level (1)
- Proximal Method (1)
- Proximal-Punkt-Verfahren (1)
- Präkonditionierung (1)
- Pseudo-allergy (1)
- Pseudogeodätische (1)
- Pseudometrik (1)
- Quadratischer Zahlkörper (1)
- Quantenmechanik (1)
- Quantenmechanisches System (1)
- Quantum control (1)
- Quasi-Variational Inequality (1)
- Quasi-Variationsungleichung (1)
- Quasi-variational inequalities (1)
- Quasibases (1)
- Quasibasis (1)
- Quasiconformal automorphism (1)
- Quasikonforme Abbildung (1)
- Randomness (1)
- Rangstatistik (1)
- Razumikhin method (1)
- Reachability matrix (1)
- Reelle Funktion (1)
- Registrierung <Bildverarbeitung> (1)
- Regularisation Methods (1)
- Regularisierung (1)
- Regularisierungsverfahren (1)
- Regularized gap function (1)
- Reine Untergruppen (1)
- Rekord (1)
- Relapse (1)
- Relativnormale (1)
- Relaxation method (1)
- Restklasse (1)
- Rezension (1)
- Ridge-Regression (1)
- Riemann Hypothesis (1)
- Riemann hypothesis (1)
- Riemannian manifolds (1)
- Riemannian optimization (1)
- Riemannsche Geometrie (1)
- Riemannsche Mannigfaltigkeiten (1)
- Riemannsche Optimierung (1)
- Risikomanagement (1)
- Risk factor (1)
- Rothe method (1)
- Runge-type Theorems (1)
- STEM classroom (1)
- STEM education (1)
- STEM integration (1)
- Satz von Cartan (1)
- Scheme for solving optimal control problems (1)
- Schnelle Fourier-Transformation (1)
- Schur ring (1)
- Schwache Kompaktheit (1)
- Schwache Lösungen (1)
- Sekundarstufe (1)
- Selberg Class (1)
- Selberg Klasse (1)
- Selbergsche L-Reihe (1)
- Semidefinite Optimierung (1)
- Semidefinite Programme (1)
- Semidualität (2)
- Semismooth Newton Method (1)
- Sequential Quadratic Hamiltonian scheme (1)
- Sequential quadratic Hamiltonian scheme (1)
- Set-valued mapping (1)
- Shape Optimization (1)
- Simulieren (1)
- Singulärfunktionen (1)
- Skalierungsfunktion (1)
- Small-Gain Theorem (1)
- Softwareentwicklung (1)
- Sparsity (1)
- Spektraltheorie (1)
- Sphäre (1)
- Spin systems (1)
- Spinsystem (1)
- Spiralflächen (1)
- Spiraltypfläche (1)
- Spiraltypflächen (1)
- Spline (1)
- Stabilitätsanalyse (1)
- Starke Kopplung (1)
- Steuerbarkeit (1)
- Stochastic Algorithms (1)
- Stochastic Process (1)
- Stochastic homogenization (1)
- Stochastik (1)
- Stochastische Optimierung (1)
- Stochastische partielle Differentialgleichung (1)
- Stochastisches System (1)
- Stokes Gleichung (1)
- Stokes equation (1)
- Stokes-Gleichung (1)
- Structural Model (1)
- Struktur (1)
- Strömung (1)
- Stückweise Polynomiale Funktion (1)
- Symmetrie (1)
- Symmetrien (1)
- Symmetrische Konfiguration (1)
- Symplektische Geometrie (1)
- System von partiellen Differentialgleichungen (1)
- Systemhalbgruppen (1)
- Systemic Risk (1)
- Systemsemigroups (1)
- Sätze (1)
- T3s (1)
- TD Kohn-Sham equations (1)
- TDDFT (1)
- Tail-behavior (1)
- Teaching (1)
- Testen (1)
- Theorems (1)
- Topologieoptimierung (1)
- Torsion-free abelian groups (1)
- Torsionsfreie abelsche Gruppe (1)
- Total Variation (1)
- Totale Variation (1)
- Toulmin Modell (1)
- Transitive Lie Groups (1)
- Transportkoeffizient (1)
- Treatment failure (1)
- Ulm-Kaplansky Invarianten (1)
- Ulm-Kaplansky invariants (1)
- Uncertainty (1)
- Uniform distribution modulo one (1)
- Uniform topology (1)
- Universal Functions (1)
- Universality (1)
- Universalität (1)
- Unsicherheit (1)
- Untergruppe (1)
- Unternehmensbewertung (1)
- Unterraumsuche (1)
- Unterräume (1)
- Uzawa Verfahren (1)
- Uzawa iteration (1)
- Value at Risk (1)
- Value ranges (1)
- Vandermonde matrix (1)
- Variationsungleichung (1)
- Verfahren der konjugierten Gradienten (1)
- Vespula (1)
- Volumen (1)
- Von Mises conditions (1)
- Vorhersagbarkeit (1)
- Vorhersagetheorie (1)
- Vorhersageverfahren (1)
- Vorkonditionierer (1)
- Wahrscheinlichkeitsrechnung (1)
- Wahrscheinlichkeitstheorie (1)
- Wahrscheinlichkeitsverteilung (1)
- Walsh (1)
- Warteschlangennetz (1)
- Weak Solutions (1)
- Weibull distribution (1)
- Weibull type density (1)
- Wein (1)
- Well-Balanced (1)
- Well-posedness (1)
- Wettbewerbsdesign (1)
- ZRP (1)
- Zeitdiskrete Approximation (1)
- Zeitoptimale Regelung (1)
- Zero Range Prozess (1)
- Zero-divisor graph (1)
- Zero-divisor graphs (1)
- Zero-inflation (1)
- Zeta-Functions (1)
- Zeta-function (1)
- Zufall (1)
- Zwei-Ebenen-Optimierung (1)
- a-point distribution (1)
- a-posteriori error estimates (1)
- abelian groups (1)
- accuracy estimate (1)
- adaptive intervention competence (1)
- adaptive refinement (1)
- adjoint and coadjoint representations (1)
- affine minimal surface (1)
- affine normal (1)
- affine scaling methods (1)
- algebraic aggregation (1)
- algebraic degree (1)
- algebraic function field (1)
- algebraische Aggregation (1)
- almost completely decomposable group (2)
- almost completely decomposable groups (1)
- analytical continuation (1)
- analytische Fortsetzung (1)
- approaches in textbooks (1)
- asthma (1)
- asymptotic analysis (1)
- asymptotic sufficiency (1)
- asymptotically flat ends (1)
- atomistic models (1)
- augmented Lagrange method (1)
- augmented Lagrangian (1)
- avoidance (1)
- backward orbit (1)
- basic mental model (1)
- beispielsbasiertes Lernen (1)
- bevacizumab (1)
- bi-steerable robot (1)
- bias (1)
- bicommutators (1)
- bilinear evolution model (1)
- black box (1)
- bleb failure (1)
- bleb scarring (1)
- block design (1)
- blood (1)
- body weight (1)
- bound constraints (1)
- bounded input bounded output stability (1)
- branch-and-bound Verfahren (1)
- branch-and-bound algorithm (1)
- buildings (1)
- canal surgery (1)
- canaloplasty (1)
- cancer (1)
- cataract surgery (1)
- centroaffine normal (1)
- chlamydia (1)
- chlamydia infection (1)
- chlamydia trachomatis (1)
- chordal Loewner equation (1)
- circular arc polygon domain (1)
- circumferential viscodilation (1)
- co-extremality coefficient (1)
- coherent forecasting (1)
- cohomology (1)
- coisotropic reduction (1)
- collinearity (1)
- combinatorial theory (1)
- companion matrix (1)
- complementarity problems (1)
- complex polynomials (1)
- composite optimization (1)
- composition of functions (1)
- compressible fluid (1)
- computational biology and bioinformatics (1)
- condenser capacity (1)
- cone-continuity type constraint qualification (1)
- conewise linear systems (1)
- confidence interval (1)
- conformal mapping (1)
- conformal pseudo-metrics (1)
- conical domain (1)
- conservation law (1)
- continuum limit (1)
- convergence (1)
- convergence for quadratic eigenvalue problems (1)
- convergent star product (1)
- converse Lyapunov theorems (1)
- convolution (1)
- convolutional code (1)
- copula (1)
- correction (1)
- coverings (1)
- critical line (1)
- cross-ownership (1)
- cubic-monoclinic martensites (1)
- cyclic matrix (1)
- deformation theory (1)
- degrees of freedom in internal energy (1)
- derivation (1)
- derivative (1)
- design (1)
- diagnosis (1)
- differential games (1)
- differential graded Lie algebra (1)
- differential graded modules (1)
- differential nash games (1)
- diffuse interface models (1)
- digital technologies (1)
- digital tools (1)
- digitale Werkzeuge (1)
- discontinuous coefficient functions (1)
- discrete systems (1)
- discrete-time systems (1)
- discrete-to-continuum (1)
- distribution modulo one (1)
- dopamine (1)
- drug adherence (1)
- duality (1)
- dyadic product BMO (1)
- echocardiography (1)
- eigenmode (1)
- elliptic PDE (1)
- elliptic curves (1)
- elliptic problems (1)
- empirical evidence (1)
- empirical investigation (1)
- entanglement measure (1)
- ensemble optimal control problems (1)
- ensemble reachability (1)
- entropy inequality (1)
- entropy minimization (1)
- epidemiology (1)
- equivariant cohomology (1)
- ergodic transformation (1)
- error estimate (1)
- erstes Randwertproblem in PDE (1)
- euclidean normal (1)
- euklidische Normale (1)
- example based learning (1)
- examples and counterexamples (1)
- exceedance counts (1)
- exchangeable D-norms (1)
- existence of solutions (1)
- expected shortfall (1)
- expectiles (1)
- explicit discontinuous Galerkin (1)
- exponent pairs (1)
- exponential growth (1)
- extremal coefficient (1)
- extremal exchangeable D-norms (1)
- extreme order statistics (1)
- extremum seeking control (1)
- filter-SQPEC algorithm (1)
- financial network (1)
- finanzielles Netzwerk (1)
- finite differences (1)
- finite element method (1)
- finite fields (1)
- finite groups (1)
- finite projective plane (1)
- finite volume method (1)
- finite volume methods (1)
- firm valuation (1)
- first boundary value problem in PDE (1)
- flexible microcatheter (1)
- fluid networks (1)
- fluid–structure interaction model (1)
- foliation (1)
- fracture (1)
- fredholm operator (1)
- freies Randwertproblem (1)
- function identification (1)
- functional D-norm (1)
- functional thinking (1)
- galois extensions (1)
- gamma-convergence (1)
- gap power series (1)
- generalized quadrangles (1)
- generator of D-norm (1)
- geometric control (1)
- geometrically linear elasticity (1)
- geometrisch lineare Elastizitätstheorie (1)
- geometrische Kontrolltheorie (1)
- geometry (1)
- gewichtete Gleichverteilung modulo eins (1)
- gewichtete Sobolevräume (1)
- gewöhnliche Differentialgleichungen (1)
- glaucoma (1)
- glaucoma surgery (1)
- global attractor (1)
- globalized proximal Newton-type method (1)
- gradient-like systems (1)
- gradientenähnliche Systeme (1)
- green energy (1)
- growth models (1)
- halbeinfache Lie Algebren (1)
- harmonic measure (1)
- hierarchical matrix (1)
- higher order methods (1)
- highly-active antiretroviral therapy (1)
- homogene Räume (1)
- homogeneous spaces (1)
- homogenization (1)
- homogeneous parameter-dependent systems (1)
- hydrodynamic limits (1)
- hyperbolic area (1)
- hyperbolic systems (1)
- image denoising (1)
- imaginary quadratic field (1)
- immune activation (1)
- impulsive systems (1)
- infeasible interior point paths (1)
- infinite dimensional optimization (1)
- infinite-dimensional systems (1)
- input-to-state stability (1)
- integral (1)
- integral graph (1)
- interior point methods (1)
- intermediate order statistics (1)
- inverse Iteration (1)
- inverse problems (1)
- isogeny-based cryptography (1)
- isomorph (1)
- isomorphic (1)
- isomorphism (1)
- kinetic chemotaxis equation (1)
- kinetic description of gases (1)
- konforme Pseudo-Metriken (1)
- kubisch-monokliner Phasenübergang (1)
- large-scale (1)
- large-time behaviour (1)
- laser simulation (1)
- linear system (1)
- literature review (1)
- local asymptotic normality (1)
- local classes (1)
- local existence (1)
- local input-to-state stability (1)
- long step methods (1)
- nonlinear reaction-diffusion equations (1)
- lyapunov methods (1)
- macrophages (1)
- magnetic fluids (1)
- mathematical biology (1)
- mathematical paperfolding (1)
- mathematics classrooms (1)
- mathematisches Modellieren (1)
- mathematisches Papierfalten (1)
- matrix decomposition (1)
- max-linear model (1)
- max-stable process (1)
- mid quantiles (1)
- minimal curves (1)
- minimal surface (1)
- mitomycin C (1)
- mittlere Krümmung (1)
- monodromy groups (1)
- months follow-up (1)
- morbid obesity (1)
- multigrid schemes (1)
- multiply connected domain (1)
- multiscale methods (1)
- multiscale modeling (1)
- multivariate Extreme Value Distribution (1)
- multivariate Gaussian distribution (1)
- multivariate exceedance (1)
- multivariate extreme value theory (1)
- multivariate generalized Pareto distribution (1)
- multivariate max-domain of attraction (1)
- multivariate statistical process control (SPC) (1)
- mutually permutable (1)
- near-isomorph (1)
- near-isomorphic (1)
- near-isomorphism (1)
- new media (1)
- nichtglatt (1)
- nichtglatte Newton-artige Verfahren (1)
- nichtholonomes System (1)
- nichtlineare & gemischte Komplementaritätsprobleme (1)
- nichtlineare Optimierung (1)
- non-Lipschitz optimization (1)
- non-convex optimal control problems (1)
- non-smooth and non-convex optimization (1)
- non-smooth large-scale optimisation (1)
- non-smooth optimal control problems (1)
- non-smooth optimization (1)
- nonconvex optimization (1)
- nonconvex smooth term (1)
- nonderogatory matrix (1)
- nonholonomic system (1)
- nonlinear and mixed complementarity problems (1)
- nonlinear inverse problems (1)
- nonlinear least squares reformulation (1)
- nonlinear optimization (1)
- normal families (1)
- normale Familien (1)
- numerical approximations (1)
- numerical finance (1)
- numerical methods (1)
- obesity (1)
- observer (1)
- one-fold-origami (1)
- open-angle glaucoma (1)
- operative principle (1)
- optimal control problem (1)
- optimal control problems (1)
- optimal solution mapping (1)
- optimization on manifolds (1)
- order of growth (1)
- over-determined problem (1)
- p-Gruppen (1)
- p-groups (1)
- p-soluble groups (1)
- p-supersolubility (1)
- parametrization (1)
- partial differential equation (1)
- partial differential equations (2)
- partial integro-differential Fokker-Planck Equation (1)
- partial integro-differential equations (1)
- partielle Differentialgleichungen (1)
- pedestrian motion (1)
- petal (1)
- phacocanaloplasty (1)
- phacotrabeculectomy (1)
- phase I (1)
- phase II (1)
- plasma modelling (1)
- plasma physics (1)
- polyatomic molecules (1)
- polymerase chain reaction (1)
- polymorphism (1)
- polynomial chaos (1)
- pontryagin maximum principle (1)
- pre-service teacher (1)
- preconditioning (1)
- prediction interval (1)
- predictive performance (1)
- professional competence (1)
- projective plane (1)
- projektive Ebene (1)
- proximal Newton method (1)
- proximal method (1)
- proximal methods (1)
- pseudo geodesics (1)
- pulmonary function (1)
- pulmonary hypertension (1)
- pure subgroups (1)
- quadratic convergence (1)
- quadratic number field (1)
- quadratische Konvergenz (1)
- quantile forecasts (1)
- quantum evolution models (1)
- quasi-continuum (1)
- quasinormality constraint qualification (1)
- queueing networks (1)
- radial Loewner equation (1)
- random walk (1)
- reaction-diffusion (1)
- reduced residues (1)
- regression (1)
- regularization (1)
- regulating (1)
- regulating subgroup (1)
- regulierend (1)
- regulierende Untergruppen (1)
- relative normal (1)
- repelling phenomenon (1)
- representation of pseudo-metrics (1)
- representations up to homotopy (1)
- ridge regression (1)
- robustness (1)
- role of mathematics in STEM (1)
- secondary education (1)
- semi-convex hulls (1)
- semi-konvexe Hüllen (1)
- semidefinite Komplementaritätsprobleme (1)
- semidefinite complementarity problems (1)
- semidefinite programming (1)
- semiduality (1)
- semigroup of holomorphic functions (1)
- semilinear elliptic operators (1)
- semisimple Lie algebras (1)
- semismooth (1)
- semismooth Newton-type methods (1)
- sequential optimality condition (1)
- sequential quadratic hamiltonian method (1)
- series (1)
- simulation (1)
- singular functions (1)
- singularly perturbed problem (1)
- singulär gestörtes Problem (1)
- smoothing-type methods (1)
- sparse control problems (1)
- special sweeps (1)
- spectral theory (1)
- spezielle Sweep-Methoden (1)
- spiral surfaces (1)
- spiral type surfaces (1)
- star products (1)
- stationarity (1)
- stochastischer Prozess (1)
- structural model (1)
- structure (1)
- structured normal form problem (1)
- strukturierte Normalformprobleme (1)
- subspace clustering (1)
- subspaces (1)
- successive approximations strategy (1)
- sufficient optimality conditions (1)
- surgical outcomes (1)
- switched systems (1)
- symmetric configuration (1)
- symmetries (1)
- symplectic geometry (1)
- systemic risk (1)
- systems biology (1)
- tail conditional expectation (1)
- tail dependence (1)
- task design (1)
- teacher education (1)
- technology (1)
- tensor rank (1)
- test instrument (1)
- therapy (1)
- tight (1)
- time-optimal control (1)
- time-varying (1)
- time‐varying delay (1)
- torsionfree abelian groups (1)
- torsionsfreie abelsche Gruppen (1)
- total variation (1)
- transport coefficients (1)
- typically real functions (1)
- uncertain volatility (1)
- uncertainty quantification (1)
- univalent functions (1)
- universality (1)
- unstetige Koeffizientenfunktionen (1)
- unzulässige Innere-Punkte-Pfade (1)
- value at risk (1)
- value-distribution (1)
- variational estimation (1)
- variational fracture (1)
- velocity jump process (1)
- velocity-dependent collision frequency (1)
- verallgemeinerte Vierecke (1)
- viral replication (1)
- volume (1)
- vorticity preserving (1)
- weight loss (1)
- weighted Sobolev spaces (1)
- weighted uniform distribution modulo one (1)
- well posedness (1)
- well-balanced (1)
- well-balancing (1)
- wine fermentation (1)
- zeitoptimale Steuerung (1)
- zentroaffine Normale (1)
- zero-finding (1)
- zeta-functions (1)
- zweiachsgelenkter Roboter (1)
- Überschreitungen (1)
- Überschreitungsanzahl (1)
- Čebyšev-Polynome (1)
Institute
- Institut für Mathematik (230)
Other participating institutions
ResearcherID
- C-2593-2016 (1)
EU-Project number / Contract (GA) number
- 304617 (2)
In Janssen and Reiss (1988) it was shown that in a location model of a Weibull type sample with shape parameter -1 < a < 1 the k(n) lower extremes are asymptotically locally sufficient. In the present paper we show that even global sufficiency holds. Moreover, it turns out that convergence of the given statistical experiments in the deficiency metric holds not only for compact parameter sets but for the whole real line.
It is shown that the rate of convergence in the von Mises conditions of extreme value theory determines the distance of the underlying distribution function F from a generalized Pareto distribution. The distance is measured in terms of the pertaining densities, with the limit being ultimately attained if and only if F is ultimately a generalized Pareto distribution. Consequently, the rate of convergence of the extremes in an iid sample, whether in terms of the distribution of the largest order statistics or of corresponding empirical truncated point processes, is determined by the rate of convergence in the von Mises condition. We prove that the converse is also true.
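For reference, the generalized Pareto distributions occurring as limits above can be written in the standard one-parameter form (a standard fact of extreme value theory, not a formula taken from the paper itself):

```latex
W_\gamma(x) \;=\; 1 - (1 + \gamma x)^{-1/\gamma},
\qquad 1 + \gamma x > 0,\; x \ge 0,
```

with the interpretation $W_0(x) = 1 - e^{-x}$ as $\gamma \to 0$; the cases $\gamma > 0$, $\gamma = 0$ and $\gamma < 0$ correspond to the Pareto, exponential and finite-endpoint beta subfamilies, respectively.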
In distance geometry problems and many other applications, we are faced with the optimization of high-dimensional quadratic functions subject to linear equality constraints. A new approach is presented that projects the constraints, preserving sparsity properties of the original quadratic form so that well-known preconditioning techniques for the conjugate gradient method remain applicable. Very large-scale cell placement problems in chip design have been solved successfully with diagonal and incomplete Cholesky preconditioning. Numerical results produced by a FORTRAN 77 program illustrate the good behaviour of the algorithm.
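The projection idea can be sketched as follows. This is a minimal dense-matrix illustration under my own naming; the thesis projects the constraints in a sparsity-preserving way and adds diagonal or incomplete Cholesky preconditioning, both of which are omitted here:

```python
import numpy as np

def projected_cg(Q, b, A, d, tol=1e-10, max_iter=200):
    """Minimize 0.5*x^T Q x - b^T x subject to A x = d.

    Illustrative sketch only: the constraints are eliminated via the
    orthogonal projector onto ker(A), and CG then runs inside the
    feasible subspace. Forming the dense projector is purely for
    demonstration; a practical implementation would preserve sparsity.
    """
    n = A.shape[1]
    # Minimum-norm particular solution of A x = d
    x = A.T @ np.linalg.solve(A @ A.T, d)
    # Orthogonal projector onto ker(A)
    P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
    r = P @ (Q @ x - b)              # projected gradient / residual
    p = -r
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Qp = P @ (Q @ p)             # keep the search inside ker(A)
        alpha = (r @ r) / (p @ Qp)
        x = x + alpha * p
        r_new = r + alpha * Qp
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return x
```

For a small test problem with Q = diag(1, 2, 3), b = (1, 1, 1) and the single constraint x1 + x2 + x3 = 1, the minimizer is (6/11, 3/11, 2/11), which the sketch reproduces in at most two CG steps.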
In this paper, convex approximation methods such as CONLIN, the method of moving asymptotes (MMA) and a stabilized version of MMA (Sequential Convex Programming) are discussed with respect to their convergence behaviour. In an extensive numerical study they are finally compared with other well-known optimization methods on 72 sizing problems.
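For orientation, the method of moving asymptotes replaces each problem function near the current iterate $x^{(k)}$ by a convex separable approximation; in Svanberg's standard form (quoted from the general MMA literature, not from the paper itself):

```latex
\tilde f^{(k)}(x) \;=\; r^{(k)}
+ \sum_{j=1}^{n} \left(
    \frac{p_j^{(k)}}{U_j^{(k)} - x_j}
  + \frac{q_j^{(k)}}{x_j - L_j^{(k)}}
  \right),
\qquad L_j^{(k)} < x_j < U_j^{(k)},
```

where the asymptotes $L_j^{(k)}$ and $U_j^{(k)}$ are moved between iterations to tighten or relax the approximation; stabilized variants control these moves to enforce convergence.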
The aim of the present paper is to clarify the role of extreme order statistics in general statistical models. This is done within the general setup of statistical experiments in Le Cam's sense. Under the assumption of monotone likelihood ratios, we prove that a sequence of experiments is asymptotically Gaussian if, and only if, a fixed number of extremes asymptotically does not contain any information. In other words: a fixed number of extremes asymptotically contains information iff the Poisson part of the limit experiment is non-trivial. Suggested by this result, we propose a new extreme value model given by local alternatives. The local structure is described by introducing the space of extreme value tangents. It turns out that under local alternatives a new class of extreme value distributions appears as limit distributions. Moreover, explicit representations of the Poisson limit experiments via Poisson point processes are found. As a concrete example, nonparametric tests for Fréchet type distributions against stochastically larger alternatives are treated. We find asymptotically optimal tests within certain threshold models.
In this thesis we study topics from affine hypersurface theory. After discussing the Euclidean normal, the Blaschke affine normal, a certain one-parameter family of relative normals and the centroaffine normal, and after defining a new one-parameter family of relative normals, we treat the following three main topics. First, we consider minimal surfaces with respect to various volumes and the role of the respective mean curvature. We compute the first and second variation of the volumes induced by the normals of the families mentioned above. Here we find that the mean curvature does not always indicate the vanishing of the first variation of the volume. Next, we transfer the notions of the adjoint and the associate of Euclidean minimal surfaces to affine minimal surfaces: in analogy to the Euclidean case, the conormal of an affine minimal surface can be represented by certain "harmonic" maps. We give a method for obtaining further affine minimal surfaces from a given one by modifying these maps accordingly. Finally, we solve a generalization of Björling's problem for normals of the families mentioned above: given a curve with two vector fields and the type of normalization, there exist (with exceptions) exactly one elliptic and one hyperbolic surface in (pseudo-)isothermal parameters with the following properties: the curve is a parameter line, the normal along the curve coincides with one of the vector fields, the conormal with the other, and the mean and Gaussian curvature satisfy a prescribed condition.
Characteristic for the solvability of elliptic systems of partial differential equations with side constraints is the occurrence of an inf-sup condition. In the prototypical case of the Stokes equations it is also known as the Ladyzhenskaya condition. The validity of this condition, that is, the existence of the corresponding constant, is a property of the domain on which the differential equation is to be solved. While mere existence already guarantees solvability, the size of the constant is very important, for instance for error estimates in numerical approximation, in particular because a similar inf-sup condition, there called the Babuska-Brezzi condition, also arises in discretizations by finite element methods. One part of this thesis is concerned with analytic estimates of the Ladyzhenskaya constant for various domains, using equivalences with related problems from complex analysis (the Friedrichs inequality) and structural mechanics (the Korn inequality). A further part deals with the relation between the continuous Ladyzhenskaya constant and the discrete Babuska-Brezzi constant. The results obtained are verified numerically with a powerful finite element program system developed for this purpose. This yields, for the first time, accurate estimates of the constants in two and three dimensions. Building on these results, a fast solution algorithm for the Stokes equations is proposed, and its superiority over classical methods such as the Uzawa iteration is demonstrated on problematic domains. While even for simple geometries an acceleration of convergence by a factor of 5 can be expected, factors of up to 1000 are possible in critical cases.
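As a point of reference for the classical method mentioned in the abstract above, here is a minimal sketch of the Uzawa iteration for a discrete Stokes-type saddle-point system; the function and parameter names are illustrative and not taken from the thesis:

```python
import numpy as np

def uzawa(A, B, f, g, omega=1.0, tol=1e-10, max_iter=5000):
    """Classical Uzawa iteration for the saddle-point system

        [A  B^T] [u]   [f]
        [B  0  ] [p] = [g]

    as it arises from Stokes-type problems. Converges for
    0 < omega < 2 / lambda_max(B A^{-1} B^T); the rate degrades as the
    inf-sup (Ladyzhenskaya) constant of the domain becomes small, which
    is exactly the effect the thesis quantifies.
    """
    p = np.zeros(B.shape[0])
    u = np.zeros(B.shape[1])
    for _ in range(max_iter):
        u = np.linalg.solve(A, f - B.T @ p)   # velocity update
        r = B @ u - g                         # divergence residual
        if np.linalg.norm(r) < tol:
            break
        p = p + omega * r                     # pressure (Richardson) update
    return u, p
```

On a toy system with A = diag(2, 3) and B = (1, 1), the exact solution is u = (0, 0), p = 1, and the iteration reaches it in a handful of steps; on domains with a small inf-sup constant the same scheme can need orders of magnitude more iterations, which motivates the faster solver proposed in the thesis.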
Part 1 of this thesis contains a summary of fundamental results from functional analysis as well as an introduction to integral and differential calculus in Fréchet spaces. In particular, Chapter 2 gives a detailed account of the Lebesgue-Bochner integral on Fréchet spaces. Part 2 treats the theory of linear differential equations on Fréchet spaces. To this end, Chapter 3 characterizes strongly differentiable semigroups and their infinitesimal generators. In Chapter 4 these results are used to study linear evolution equations (of hyperbolic or parabolic type). Part 3 contains the central results of the thesis. In Chapter 5 two existence and uniqueness theorems for nonlinear ordinary differential equations in tame Fréchet spaces are proved. Chapter 6 provides an application of the results of Chapter 5 to nonlinear first-order partial differential equations.
A completely decomposable group is a direct sum of subgroups of the rationals. An almost completely decomposable group is a torsion free abelian group that contains a completely decomposable group as a subgroup of finite index. Tight subgroups are maximal subgroups (with respect to set inclusion) among the completely decomposable subgroups of an almost completely decomposable group. In this dissertation we prove an extended version of the theorem of Bezout, give a new criterion for the tightness of a completely decomposable subgroup, derive some conditions under which a tight subgroup is regulating, and generalize a theorem of Campagna. We give an example of an almost completely decomposable group none of whose regulating subgroups has a quotient with minimal exponent. We show that among the types of elements of a coset modulo a completely decomposable group there exists a unique maximal type, which we define to be the coset type. We give criteria for tightness and for regulating subgroups in terms of coset types, as well as a representation of the type subgroups using coset types. We introduce the notion of reducible cosets and show their key role for transitions from one completely decomposable subgroup to another one containing the first as a proper subgroup. We give an example of a tight, but not regulating, subgroup which contains the regulator. We develop the notion of a fully single covered subset of a lattice, show that V-free implies fully single covered, but not necessarily vice versa, and we define an equivalence relation on the set of all finite subsets of a given lattice. We develop an extension of ordinary Hasse diagrams, and apply the lattice theoretic results to the lattice of types and to almost completely decomposable groups.
In this thesis we investigate near-isomorphism classes and isomorphism classes of almost completely decomposable groups. In Chapter 2 we introduce the concept of almost completely decomposable groups and sum up their most important facts. A local group is an almost completely decomposable group with a primary regulator quotient. A uniform group is a rigid local group with a homocyclic regulator quotient. In Chapter 3 a weakening of isomorphism, called type-isomorphism, appears. It is shown that type-isomorphism agrees with Lady's near-isomorphism. By the Main Decomposition Theorem and the Primary Reduction Theorem we are allowed to restrict ourselves on clipped local groups, namely groups without a direct rank-one summand. In Chapter 4 we collect facts of matrices over commutative rings with an identity element. Matrices over the local ring (Z / p^e Z) of residue classes of the rational integers modulo a prime power play an important role. In Chapter 5 we introduce representing matrices of finite essential extensions. Here a normal form for local groups is found by the Gauß algorithm. Uniform groups have representing matrices in Hermite normal form. The classification problems for almost completely decomposable groups up to isomorphism and up to near-isomorphism can be rephrased as equivalence problems for the representing matrices. In Chapter 6 we derive a criterion for the representing matrices of local groups in Gauß normal form. In Chapter 7 we formulate the matrix criterion for uniform groups. Two representing matrices in Hermite normal form describe isomorphic groups if and only if the rest blocks of the representing matrices are T-diagonally equivalent. Starting from a fixed near-isomorphism class in Chapter 8 we investigate isomorphism classes of uniform groups. We count groups and isomorphism classes. 
In Chapter 9 we specialize on uniform groups of rank 2r with a regulator quotient of rank r such that the rest block of the representing matrix is invertible and normed.
In my Ph.D. thesis "On the geometry and parametrization of almost invariant subspaces and observer theory" I consider the set of almost conditioned invariant subspaces of fixed dimension for a given fixed linear finite-dimensional time-invariant observable control system in state space form. Almost conditioned invariant subspaces were introduced by Willems. They generalize the concept of a conditioned invariant subspace requiring the invariance condition to hold only up to an arbitrarily small deviation in the metric of the state space. One of the goals of the theory of almost conditioned invariant subspaces was to identify the subspaces appearing as limits of sequences of conditioned invariant subspaces. An example due to Özveren, Verghese and Willsky, however, shows that the set of almost conditioned invariant subspaces is not big enough. I address this question in a joint paper with Helmke and Fuhrmann (Towards a compactification of the set of conditioned invariant subspaces, Systems and Control Letters, 48(2):101-111, 2003). Antoulas derived a description of conditioned invariant subspaces as kernels of permuted and truncated reachability matrices of controllable pairs of the appropriate size. This description was used by Helmke and Fuhrmann to construct a diffeomorphism from the set of similarity classes of certain controllable pairs onto the set of tight conditioned invariant subspaces. In my thesis I generalize this result to almost conditioned invariant subspaces describing them in terms of restricted system equivalence classes of controllable triples. Furthermore, I identify the controllable pairs appearing in the kernel representations of conditioned invariant subspaces as being induced by corestrictions of the original system to the subspace. Conditioned invariant subspaces are known to be closely related to partial observers.
In fact, a tracking observer for a linear function of the state of the observed system exists if and only if the kernel of that function is conditioned invariant. In my thesis I show that the system matrices of the observers are in fact the corestrictions of the observed system to the kernels of the observed functions. They in turn are closely related to partial realizations. Exploring this connection further, I prove that the set of tracking observer parameters of fixed size, i.e. tracking observers of fixed order together with the functions they are tracking, is a smooth manifold. Furthermore, I construct a vector bundle structure for the set of conditioned invariant subspaces of fixed dimension together with their friends, i.e. the output injections making the subspaces invariant, over that manifold. Willems and Trentelman generalized the concept of a tracking observer by including derivatives of the output of the observed system in the observer equations (PID-observers). They showed that a PID-observer for a linear function of the state of the observed system exists if and only if the kernel of that function is almost conditioned invariant. In my thesis I replace PID-observers by singular systems, which has the advantage that the system matrices of the observers coincide with the matrices appearing in the kernel representations of the subspaces. In a second approach to the parametrization of conditioned invariant subspaces Hinrichsen, Münzner and Prätzel-Wolters, Fuhrmann and Helmke, and Ferrer, F. Puerta, X. Puerta and Zaballa derived a description of conditioned invariant subspaces in terms of images of block Toeplitz type matrices. They used this description to construct a stratification of the set of conditioned invariant subspaces of fixed dimension into smooth manifolds. These so called Brunovsky strata consist of all the subspaces with fixed restriction indices.
They constructed a cell decomposition of the Brunovsky strata into so called Kronecker cells. In my thesis I show that in the tight case this cell decomposition is induced by a Bruhat decomposition of a generalized flag manifold. I identify the adherence order of the cell decomposition as being induced by the reverse Bruhat order.
Spiral type surfaces are minimal surfaces of three-dimensional Euclidean space distinguished by a high degree of symmetry under complex similarity transformations of the minimal curve. They owe their name to the following property: they and their complex homothetic surfaces are the only minimal surfaces developable onto spiral surfaces. Well-known spiral type surfaces are the spiral minimal surfaces (simultaneously minimal and spiral surfaces) and the Bour surfaces (minimal surfaces developable onto surfaces of revolution). The catenoid and the Enneper surface are special Bour surfaces. In this thesis the spiral type surfaces are examined with respect to their geometric properties. We determine their periodicities and symmetries and look for distinguished surface curves on them. We use a global Weierstrass representation of the spiral type surfaces. In this representation the surfaces form a family with one complex family parameter. From this representation we derive all symmetries of the spiral type surfaces under linear similarity transformations of the minimal curve. As special cases we obtain the symmetries under association and derivation (rotation of the minimal curve by an imaginary angle), as well as the real symmetries (rotational, reflective and scaling symmetries). Among the spiral type surfaces there are only two translationally symmetric surfaces. Reversing the orientation of a spiral type surface corresponds (up to complex homothety) to a change of sign of the surface parameter. Moreover, the sign of the real or imaginary part of the surface parameter can be reversed by simple reflections in the coordinate planes or rotations about the coordinate axes, respectively. Finally, we present distinguished surface curves on the spiral type surfaces: lines of curvature, asymptotic lines and geodesics, as well as their generalizations, the pseudo lines of curvature and the pseudo geodesics.
This thesis studies the analyticity properties of infeasible interior-point paths for monotone complementarity problems and discusses possible algorithmic applications. Chapter 2 collects some concepts and results from matrix analysis that are needed for the proofs in the following chapters. Chapter 3 gives a precise definition of the notions "monotone linear complementarity problem" (LCP) and "semidefinite monotone linear complementarity problem" (SDLCP) and explains the basic idea behind interior-point methods for solving such problems. Chapter 4 contains the main analytic results for monotone complementarity problems. Section 4.1 reviews some well-known results on the analyticity properties of infeasible interior-point paths for LCPs. In Section 4.2 these are carried over to the semidefinite case. Under the assumption that the underlying SDLCP possesses a strictly complementary solution, it is shown that the interior-point paths are analytic even at the boundary point. Chapter 5 uses the results of Chapter 4 to establish the high local order of convergence of a long-step method for solving SDLCPs. Chapter 6 introduces a new method for solving LCPs and SDLCPs by means of interior-point techniques, where the path functions are chosen such that all iterates lie on infeasible central paths. Global and local convergence of the method are proved.
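For orientation, the interior-point paths discussed above are deformations of the central path of a monotone LCP with data (M, q), which in standard textbook notation is the family of solutions of

```latex
s = Mx + q, \qquad x_i s_i = \mu \quad (i = 1,\dots,n), \qquad x > 0,\; s > 0,
```

parametrized by $\mu > 0$; infeasible variants additionally allow a residual in the linear equation that is driven to zero along the path. The analyticity questions concern the behavior of $(x(\mu), s(\mu))$ as $\mu \to 0$, in particular at the boundary point $\mu = 0$.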
We consider homogeneous spaces G/H with the same rational homotopy as a product of a 1-sphere and a (m+1)-sphere. We show that these spaces also have the rational cohomology of such a sphere product if H is connected and if the quotient has dimension m+2. Furthermore, we prove that if additionally the fundamental group of G/H is cyclic, then G/H is locally a product of a 1-torus and of A/H, where A/H is a simply connected rational cohomology (m+1)-sphere (and hence classified). If H fails to be connected, then with U as the connected component of H the G-action on the covering space G/U of G/H has connected stabilizers, and the results apply to G/U. To show that under the assumptions above every natural number may be realized as the order of the group of connected components of H, we calculate the cohomology of certain homogeneous spaces. We also determine the rational cohomology of the fibre bundle U-->G-->G/U if G/H meets the assumptions above. This is done by considering the respective Leray-Serre spectral sequence. The structure of the cohomology of U-->G-->G/U then gives a second proof for the structure of compact connected Lie groups acting transitively on spaces with the rational homotopy of a product of a 1-sphere and a (m+1)-sphere. Since a quotient of a homogeneous space with the same rational homotopy or cohomology as a product of a 1-sphere and a (m+1)-sphere is not simply connected, there often arises the question whether or not a considered fibre bundle or fibration is orientable. A large amount of space will therefore be given to the problem of showing that certain fibrations are orientable.
For compact connected (m+2)-manifolds with cyclic fundamental groups and with the rational homotopy of a product of a 1-sphere and a (m+1)-sphere we show the following: if a connected Lie group acts transitively on the manifold, then the maximal compact subgroups are either transitive, or their orbits are simply connected rational cohomology spheres of codimension 1. Homogeneous spaces with the same rational cohomology or homotopy as a product of a 1-sphere and a (m+1)-sphere play a role in the study of different types of geometrical objects. They appear for example as focal manifolds of isoparametric hypersurfaces with four distinct principal curvatures. Further examples of such spaces are the point spaces and the line spaces of compact connected generalized quadrangles. We determine the isometry groups of isoparametric hypersurfaces with 4 principal curvatures of multiplicities 1 and m which are transitive on the focal manifold with non-trivial fundamental group. Buildings were introduced by Jacques Tits to give interpretations of simple groups of Lie type. They are a far-reaching generalization of projective spaces, in particular a generalization of projective planes. There is another generalization of projective planes called generalized polygons. A projective plane is the same as a generalized triangle. The generalized polygons are also contained in the class of buildings: they are the buildings of rank 2. To compact quadrangles one can assign a pair of natural numbers called the topological parameters of the quadrangles. We treat the case k=1. It turns out that there are no other point-transitive compact connected Lie groups for (1,m)-quadrangles than the ones for the real orthogonal quadrangles.
Furthermore, we solve the problem of three infinite series of group actions which Kramer left as open problems; there are no quadrangles with the homogeneous spaces in question as point spaces (up to maybe a finite number of small parameters in one of the three series).
A well-known heuristic principle of A. Bloch describes the correspondence between criteria for the constancy of entire functions and normality criteria. In this dissertation we investigate the validity of Bloch's principle for gap series problems, as well as connections between questions of normality and the semiduality of one or of two functions. The first two chapters provide the tools from Nevanlinna's value distribution theory and from normality theory needed in the sequel. In the third chapter we prove a new normality criterion for families of holomorphic functions for which a differential polynomial of a certain form is zero-free. This generalizes earlier results of Hayman, Drasin, Langley and Chen & Hua. Chapter 4 is devoted to the proof of one of our most important tools in what follows: a deep convergence theorem of H. Cartan on families of p-tuples of zero-free holomorphic functions subject to a linear relation. In Chapter 5 the concepts of duality and semiduality are introduced and the connection with questions of normality is discussed. The new results on gap series are found in the sixth chapter. The emphasis lies, on the one hand, on so-called AP gap series and, on the other hand, on general construction methods by which new semidual gap structures can be obtained from known ones. Many of our proofs rely essentially on Cartan's theorem from Chapter 4. In the seventh chapter we extend our semiduality investigations to sets of two functions. We use normality criteria (above all the one proved in Chapter 3 and Cartan's theorem) to identify particular sets as not semidual. Finally, we construct an example of a semidual set consisting of two functions.
The classification of isoparametric hypersurfaces in spheres with a homogeneous focal manifold is a project that was started by Linus Kramer. It extends results by E. Cartan and by Hsiang and Lawson. Kramer carries out the major part of this classification in his Habilitationsschrift. In particular he obtains a classification for the cases where the homogeneous focal manifold is at least 2-connected. Results of E. Cartan, Dorfmeister and Neher, and Takagi also solve parts of the classification problem. This thesis completes the classification. We classify all closed isoparametric hypersurfaces in spheres with g>2 distinct principal curvatures one of whose multiplicities is 2 such that the lower dimensional focal manifold is homogeneous. The methods are essentially the same as in Kramer's Habilitationsschrift. The cohomology of the focal manifolds in question is known. This leads to two topological classification problems, which are also solved in this thesis. We classify simply connected homogeneous spaces of compact Lie groups with the same integral cohomology ring as a product of spheres S^2 x S^m with m odd on the one hand, and with a truncated polynomial ring Q[a]/(a^m) with one generator of even degree and m > 1 as rational cohomology ring on the other hand.
The point of departure for the present work has been the following free boundary value problem for analytic functions $f$ which are defined on a domain $G \subset \mathbb{C}$ and map into the unit disk $\mathbb{D}= \{z \in \mathbb{C} : |z|<1 \}$. Problem 1: Let $z_1, \ldots, z_n$ be finitely many points in a bounded simply connected domain $G \subset \mathbb{C}$. Show that there exists a holomorphic function $f:G \to \mathbb{D}$ with critical points $z_j$ (counted with multiplicities) and no others such that $\lim_{z \to \xi} \frac{|f'(z)|}{1-|f(z)|^2}=1$ for all $\xi \in \partial G$. If $G=\mathbb{D}$, Problem 1 was solved by Kühnau [5] in the case of one critical point, and for more than one critical point by Fournier and Ruscheweyh [3]. The method employed by Kühnau, Fournier and Ruscheweyh easily extends to more general domains $G$, say bounded by a Dini-smooth Jordan curve, but does not work for arbitrary bounded simply connected domains. In this paper we present a new approach to Problem 1, which shows that this boundary value problem is not an isolated question in complex analysis, but is intimately connected to a number of basic open problems in conformal geometry and non-linear PDE. One of our results is a solution to Problem 1 for arbitrary simply connected domains. However, we shall see that our approach has also some other ramifications, for instance to a well-known problem due to Rellich and Wittich in PDE. Roughly speaking, this paper is broken down into two parts. In a first step we construct a conformal metric in a bounded regular domain $G\subset \mathbb{C}$ with prescribed non-positive Gaussian curvature $k(z)$ and prescribed singularities by solving the first boundary value problem for the Gaussian curvature equation $\Delta u =-k(z) e^{2u}$ in $G$ with prescribed singularities and continuous boundary data.
This is related to the Berger-Nirenberg problem in Riemannian geometry, the question which functions on a surface R can arise as the Gaussian curvature of a Riemannian metric on R. The special case where $k(z)=-4$ and the domain $G$ is bounded by finitely many analytic Jordan curves was treated by Heins [4]. In a second step we show that every conformal pseudo-metric on a simply connected domain $G\subseteq \mathbb{C}$ with constant negative Gaussian curvature and isolated zeros of integer order is the pullback of the hyperbolic metric on $\mathbb{D}$ under an analytic map $f:G \to \mathbb{D}$. This extends a theorem of Liouville which deals with the case that the pseudo-metric has no zeros at all. These two steps together allow a complete solution of Problem 1. Contents: Chapter I contains the statement of the main results and connects them with some old and new problems in complex analysis, conformal geometry and PDE: the Uniformization Theorem for Riemann surfaces, the problem of Schwarz-Picard, the Berger-Nirenberg problem, Wittich's problem, etc. Chapter II and III have preparatory character. In Chapter II we recall some basic results about ordinary differential equations in the complex plane. In our presentation we follow Laine [6], but we have reorganized the material and present a self-contained account of the basic features of Riccati, Schwarzian and second order differential equations. In Chapter III we discuss the first boundary value problem for the Poisson equation. We shall need to consider this problem in the most general situation, which does not seem to be covered in a satisfactory way in the existing literature, see [1,2]. In Chapter IV we turn to a discussion of conformal pseudo-metrics in planar domains. We focus on conformal metrics with prescribed singularities and prescribed non-positive Gaussian curvature.
We establish the existence of such metrics, that is, we solve the corresponding Gaussian curvature equation by making use of the results of Chapter III. In Chapter V we show that every constantly curved pseudo-metric can be represented as the pullback of either the hyperbolic, the Euclidean or the spherical metric under an analytic map. This is proved by using the results of Chapter II. Finally, in Chapter VI, we give some applications of our results. [1,2] Courant, R., Hilbert, D., Methoden der Mathematischen Physik, Erster/Zweiter Band, Springer-Verlag, Berlin, 1931/1937. [3] Fournier, R., Ruscheweyh, St., Free boundary value problems for analytic functions in the closed unit disk, Proc. Amer. Math. Soc. (1999), 127 no. 11, 3287-3294. [4] Heins, M., On a class of conformal metrics, Nagoya Math. J. (1962), 21, 1-60. [5] Kühnau, R., Längentreue Randverzerrung bei analytischer Abbildung in hyperbolischer und sphärischer Geometrie, Mitt. Math. Sem. Giessen (1997), 229, 45-53. [6] Laine, I., Nevanlinna Theory and Complex Differential Equations, de Gruyter, Berlin - New York, 1993.
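In the notation of Problem 1, the pseudo-metric appearing in the boundary condition is the pullback of the hyperbolic metric, a standard fact from conformal geometry that the abstract implicitly uses:

```latex
% The hyperbolic metric on the unit disk (Gaussian curvature -4) is
%   \lambda_{\mathbb{D}}(w)\,|dw| = \frac{|dw|}{1-|w|^{2}} .
% Its pullback under an analytic map f : G \to \mathbb{D} is
%   (f^{*}\lambda_{\mathbb{D}})(z) = \frac{|f'(z)|}{1-|f(z)|^{2}} ,
% so the boundary condition of Problem 1 requires this pseudo-metric to
% tend to 1 at every boundary point of G, while its zeros are exactly the
% critical points of f.
```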
This thesis describes algorithms for the solution of linear semidefinite programs. Under a suitable regularity assumption, a semidefinite program is equivalent to its optimality conditions. We first transform the optimality conditions, or the central path conditions, into a nonlinear system of equations by means of matrix-valued NCP functions. This nonlinear and partly nondifferentiable system of equations is then solved with a Newton-like method. Due to the reformulation as a nonlinear system of equations, the positive (semi-)definiteness of the matrices involved no longer has to be enforced explicitly during the iteration. Moreover, it is shown that, in contrast to interior-point methods, this approach immediately generates symmetric search directions. To obtain global convergence, several globalization strategies (line search, trust-region approach) are investigated. For the predictor-corrector method and the trust-region method considered here, local superlinear convergence is shown under strict complementarity and nondegeneracy. The theoretical analysis of a nonsmooth Newton method yields local quadratic convergence without strict complementarity, provided the nondegeneracy assumption is suitably modified.
Pure subgroups of completely decomposable torsion-free abelian groups are called Butler groups. Such a group can be represented as a finite sum of rational rank-1 groups. Such a representation is not unique. Therefore, methods are developed that lead to a representation with pure summands. Moreover, both the critical typeset and the type subgroups can be read off directly from this representation. This simplifies the computer-aided treatment of Butler groups and, in addition, allows a more elegant presentation.
The aim of this work is a computer-aided search for all projective planes of a given order, up to isomorphism, by computing their incidence matrices. By suitably pre-structuring the matrix by means of the double ordering, this succeeds up to order 9 on a current PC. In this context, a sufficiently fast algorithm is needed, in particular, to decide whether two incidence matrices belong to the same projective plane. The particular structure exhibited by the computed examples of doubly ordered incidence matrices of the Desarguesian planes is furthermore supported by theoretical considerations. A final chapter establishes a connection between projective planes and special block designs.
The starting point of this work was a publication by D. Braess [Bra01], which investigated the quality of approximation by real polynomials of the functions $$ \frac{1}{((x-x_0)^2 + (y-y_0)^2)^s}, \qquad x_0^2 + y_0^2 \ge 1, \quad s \in (0,\infty),$$ on the unit disk $x^2+y^2 \le 1$. Braess's results, and in particular the open problems he raised, were of special interest, since they suggested that the classical theory of ``maximal convergence'' in the sense of Walsh can be extended (initially) to the real analytic functions mentioned above. (The theory of maximal convergence relates the quality of polynomial approximation of a function on a compact set to the analyticity of this function.) \\ The main subject of this work is the extension of the classical concept of maximal convergence to real analytic functions in higher dimensions. Several maximal convergence theorems are proved, both in one and in several variables. \\ The work is divided into three main parts. \\[2mm] In the first part, the theoretical background of maximal convergence is related to the circle of problems raised by Braess. It is shown that the following theorem holds for functions which are squared moduli of holomorphic functions: \\ {\bf Theorem 1}: Let $g$ be a holomorphic function on the closed unit disk $\overline{\mathbb{D}}:=\{ z \in \mathbb{C} : |z| \le 1\}$ and $F(x,y):= |g(x+iy)|^2$, $x,y \in \mathbb{R}$.
Then $$ \limsup_{n \to \infty} \sqrt[n]{E_n ( \overline{\mathbb{D}},F)} = \frac{1}{\rho}$$ holds if and only if $g$ is holomorphic on $ \{ z \in \mathbb{C} : |z| < \rho \}$ but on no strictly larger disk, where $$ E_n ( \overline{\mathbb{D}},F)= \inf \{ ||F -P_n||_{\overline{\mathbb{D}}}, \, P_n: \mathbb{R}^2 \to \mathbb{R} \mbox{ polynomial of degree } \le n \}.$$ This theorem not only contains the results of Braess [Bra01], but extends them and answers the questions raised by Braess completely. Moreover, the theorem exhibits the exact analogue of the classical maximal convergence concept for the class of squared-modulus holomorphic functions in $\mathbb{R}^2$. \\[2mm] There are many generalizations of the notion of maximal convergence to several complex variables in the literature. With regard to the present work, the articles [Sic62] and [Sic81] deserve particular mention. These known results are used in the second part of the work to extend the notion of maximal convergence to several real variables. Note that the decisive difference here lies in the class of approximating polynomials. \\[2mm] The third part is concerned with the generalization of Theorem 1 to several variables. Closely connected with this circle of problems is the characterization of a certain extremal function. This function is needed to determine the domain of analyticity of the function to be approximated. By means of a suitable representation of the extremal function and a characterization of the domain of analyticity, the following main theorem of the present work is finally proved:\\ {\bf Theorem 2}: Let $g,h$ be holomorphic functions on the closed unit ball $\overline{\mathbb{D}}_N:=\{ z \in \mathbb{C}^N : |z| \le 1\}$ and $F(x,y):= g(x+iy) \overline{h(x+iy)}$, $x,y \in \mathbb{R}^N$.
Then $$ \limsup_{n \to \infty} \sqrt[n]{E_n ( \overline{\mathbb{D}}_N,F)} = \frac{1}{\rho}$$ holds if and only if $g,h$ are holomorphic on ${\mathbb{D}}_{N,\rho}:= \{ z \in \mathbb{C}^N : |z| < \rho \}$ and at least one of the two functions $g,h$ cannot be holomorphically extended to any ball strictly larger than $\mathbb{D}_{N,\rho}$. Here $$ E_n ( \overline{\mathbb{D}}_N,F)= \inf \{ ||F -P_n||_{\overline{\mathbb{D}}_N}, \, P_n: \mathbb{R}^{2N} \to \mathbb{C} \mbox{ polynomial of degree } \le n \}.$$ [Bra01] Braess, D., {\it Note on the Approximation of Powers of the Distance in Two-Dimensional Domains}, Constructive Approximation (2001), {\bf 17} No. 1, 147-151. \\ [Sic62] Siciak, J., {\it On some extremal functions and their applications in the theory of analytic functions of several complex variables}, Trans. Amer. Math. Soc. (1962), {\bf 105}, 322-357. \\ [Sic81] Siciak, J., {\it Extremal plurisubharmonic functions in $\mathbb{C}^N$}, Ann. Pol. Math. (1981), {\bf 39}, 175-211.
In this thesis a new and powerful approach for modeling laser cavity eigenmodes is presented. This approach is based on an eigenvalue problem for singularly perturbed partial differential operators with complex coefficients; such operators have not been investigated in detail until now. The eigenvalue problem is discretized by finite elements, and convergence of the approximate solution is proved by using an abstract convergence theory also developed in this dissertation. This theory for the convergence of an approximate solution of a (quadratic) eigenvalue problem, which in particular can be applied to a finite element discretization, is interesting in its own right, since the ideas can conceivably be used to handle equations with a more complex nonlinearity. The discretized eigenvalue problem is essentially solved by preconditioned GMRES, where the preconditioner is constructed according to the underlying physics of the problem. The power and correctness of the new approach for computing laser cavity eigenmodes is clearly demonstrated by successfully simulating a variety of different cavity configurations. The thesis is organized as follows: Chapter 1 contains a short overview of solving the so-called Helmholtz equation with the help of finite elements. The main part of Chapter 2 is dedicated to the analysis of a one-dimensional model problem containing the main idea of a new model for laser cavity eigenmodes which is derived in detail in Chapter 3. Chapter 4 comprises a convergence theory for the approximate solution of quadratic eigenvalue problems. In Chapter 5, a stabilized finite element discretization of the new model is described and its convergence is proved by applying the theory of Chapter 4. Chapter 6 contains computational aspects of solving the resulting system of equations and, finally, Chapter 7 presents numerical results for various configurations, demonstrating the practical relevance of our new approach.
The almost completely decomposable groups form a subclass of the Butler groups. The concept of the regulator, i.e. the intersection of all regulating subgroups, is indispensable for almost completely decomposable groups. This concept extends in a natural way to the whole class of Butler groups. However, in the more general case of Butler groups the formation of the regulator can a priori be iterated. This immediately raises the question whether there exist Butler groups with regulator chains of length greater than 1 at all. A first example of length 2 was constructed in 1997 by Lehrmann and Mutzbauer. In this dissertation, Butler groups with arbitrary prescribed finite chain length are exhibited using conceptually new techniques. Fundamental difficulties in this endeavor result from the lack, indeed the impossibility, of a canonical representation of Butler groups. One uses the commonly employed sum representation of Butler groups. It is exactly at this point that completely new methods are required, compared with the almost completely decomposable groups and their canonical regulator representation. All subtasks in the construction of Butler groups that are standard for almost completely decomposable groups become problematic here, among them the formation of pure hulls, the determination of regulating subgroups and the formation of the regulator.
An exhaustive discussion of constraint qualifications (CQs) and stationarity concepts for mathematical programs with equilibrium constraints (MPECs) is presented. It is demonstrated that all but the weakest CQ, Guignard CQ, are too strong for a discussion of MPECs. Therefore, MPEC variants of all the standard CQs are introduced and investigated. A strongly stationary point (which is simply a KKT point) is seen to be a necessary first-order optimality condition only under the strongest CQs: MPEC-LICQ, MPEC-SMFCQ and Guignard CQ. Therefore a whole set of KKT-type conditions is investigated. A simple approach is given which shows that A-stationarity is a necessary first-order condition under MPEC-Guignard CQ. Finally, a whole chapter is devoted to M-stationarity, among the strongest stationarity concepts, second only to strong stationarity. It is shown to be a necessary first-order condition under MPEC-Guignard CQ, the weakest known CQ for MPECs.
The analysis of real data by statistical methods with the aid of a software package common in industry and administration is usually not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links elements of time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS (Statistical Analysis System). Consequently this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to the academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, where SAS provides the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or any particular computer system is expected, so only a short training period is needed. This book is meant for a two-semester course (lecture, seminar or practical training) where the first two chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 3, 4 and 5 deal with its analysis in the frequency domain and can be worked through in the second term. To understand the mathematical background, some notions are useful, such as convergence in distribution, stochastic convergence and maximum likelihood estimators, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises. An exhaustive treatment is recommended. Throughout, this book is subdivided into a statistical part and an SAS-specific part.
For better clarity, the SAS-specific part, including the diagrams generated with SAS, always starts with a computer symbol, representing the beginning of a session at the computer, and ends with a printer symbol for the end of this session. This book is an open source project under the GNU Free Documentation License.
In this work the structure of (countable) abelian p-groups is investigated by considering their quasi-bases, which are defined as certain generating systems of the given p-group. The investigation focuses in particular on nonseparable p-groups and their inductive quasi-bases.
A Lie algebraic generalization of the classical and the Sort-Jacobi algorithm for diagonalizing a symmetric matrix is proposed. The coordinate-free setting provides new insights into the nature of Jacobi-type methods and allows a unified treatment of several structured eigenvalue and singular value problems, including normal form problems not studied so far. Local quadratic convergence is shown for both types of Jacobi methods, with a full treatment of both the regular and the irregular case. New sweep methods are introduced that generalize the special cyclic sweep for symmetric matrices and ensure local quadratic convergence also for irregular elements. The new sweep methods yield faster convergence than the previously known cyclic schemes.
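For orientation, the classical Jacobi method that this abstract generalizes can be sketched as follows; this is the textbook symmetric-matrix case only, not the Lie algebraic or Sort-Jacobi variants developed in the thesis.

```python
import numpy as np

def jacobi_sweep(A):
    # One cyclic sweep of the classical Jacobi method for a symmetric matrix:
    # each off-diagonal entry (p, q) is annihilated in turn by a Givens rotation.
    A = A.copy()
    n = A.shape[0]
    for p in range(n - 1):
        for q in range(p + 1, n):
            if A[p, q] == 0.0:
                continue
            # Standard rotation parameters (Golub/Van Loan style).
            tau = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
            t = np.sign(tau) / (abs(tau) + np.sqrt(1.0 + tau * tau)) if tau != 0 else 1.0
            c = 1.0 / np.sqrt(1.0 + t * t)
            s = t * c
            J = np.eye(n)
            J[p, p] = J[q, q] = c
            J[p, q], J[q, p] = s, -s
            A = J.T @ A @ J  # similarity transform; eigenvalues are preserved
    return A

A = np.array([[2.0, 1.0], [1.0, 2.0]])
for _ in range(5):
    A = jacobi_sweep(A)
# The diagonal converges to the eigenvalues {1, 3}.
```

Each sweep drives the off-diagonal mass toward zero; the local quadratic convergence results of the thesis concern exactly this late phase of the iteration, in a far more general setting.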
In this thesis, affine-scaling methods for two different types of mathematical problems are considered. The first type consists of nonlinear optimization problems subject to bound constraints. A class of new affine-scaling Newton-type methods is introduced. The methods are shown to be locally quadratically convergent without assuming strict complementarity at the solution. The new methods differ from previous ones mainly in the choice of the scaling matrix. The second type consists of semismooth systems of equations with bound constraints. A new affine-scaling trust-region method for these problems is developed. The method is shown to have strong global and local convergence properties under suitable assumptions. Numerical results are presented for a number of problems arising from different areas.
The investigation of multivariate generalized Pareto distributions (GPDs) in the framework of extreme value theory has begun only lately. Recent results show that they can, as in the univariate case, be used in peaks-over-threshold approaches. In this manuscript we investigate the definition of GPDs from Section 5.1 of Falk et al. (2004), which, in the region of interest, does not differ from those of other authors. We first show some theoretical properties and introduce important examples of GPDs. Simulation methods are an important tool for the further investigation of these distributions. We describe several methods of simulating GPDs, beginning with an efficient method for the logistic GPD. This algorithm is based on the Shi transformation, which was introduced by Shi (1995) and was used in Stephenson (2003) for the simulation of multivariate extreme value distributions of logistic type. We also present nonparametric and parametric estimation methods in GPD models. We estimate the angular density nonparametrically in arbitrary dimension, the bivariate case turning out to be a special case. The asymptotic normality of the corresponding estimators is shown. For the parametric estimators, which are mainly based on maximum likelihood methods, asymptotic normality is likewise shown under certain regularity conditions. Finally the methods are applied to a real hydrological data set containing water discharges of the rivers Altmühl and Danube in southern Bavaria.
This thesis is concerned with numerical methods for solving nonlinear and mixed complementarity problems. Such problems arise from a variety of applications, such as equilibrium models in economics, contact and structural mechanics problems, obstacle problems, and discrete-time optimal control problems. In this thesis we present a new formulation of nonlinear and mixed complementarity problems based on the Fischer-Burmeister function approach. Unlike traditional reformulations, our approach leads to an over-determined system of nonlinear equations. This has the advantage that certain drawbacks of the Fischer-Burmeister approach are avoided. Among other favorable properties of the new formulation, the natural merit function turns out to be differentiable. To solve the arising over-determined system we use a nonsmooth damped Levenberg-Marquardt-type method and investigate its convergence properties. Under mild assumptions, it can be shown that the global and local fast convergence results are similar to those of some of the better equation-based methods. Moreover, the new method turns out to be significantly more robust than the corresponding equation-based method. For large complementarity problems, however, the performance of this method suffers from the need to solve the arising linear least squares problem exactly at each iteration. Therefore, we suggest a modified version which allows inexact solutions of the least squares problems by using an appropriate iterative solver. Under certain assumptions, the favorable convergence properties of the original method are preserved. As an alternative method for mixed complementarity problems, we consider a box-constrained least squares formulation along with a projected Levenberg-Marquardt-type method. To globalize this method, trust-region strategies are proposed. Several ingredients are used to improve this approach: affine scaling matrices and multi-dimensional filter techniques.
Global convergence results as well as local superlinear/quadratic convergence are shown under appropriate assumptions. Combining the advantages of the new methods, a new software package for solving mixed complementarity problems is presented.
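The Fischer-Burmeister reformulation underlying this approach can be illustrated in one dimension; the toy problem and the plain Newton iteration below are illustrative only (the thesis uses a nonsmooth damped Levenberg-Marquardt method on an over-determined system).

```python
import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = a + b - sqrt(a^2 + b^2);
    # phi(a, b) = 0  <=>  a >= 0, b >= 0 and a * b = 0.
    return a + b - np.sqrt(a * a + b * b)

def ncp_residual(x, F):
    # Componentwise FB residual of the NCP: find x >= 0 with F(x) >= 0, x^T F(x) = 0.
    return fischer_burmeister(x, F(x))

# Toy 1-D NCP with F(x) = x - 1; the unique solution is x = 1 (x > 0, F(x) = 0).
F = lambda x: x - 1.0
x = np.array([2.0])
for _ in range(50):
    # Plain Newton step on the residual, with a forward-difference Jacobian
    # (the residual is smooth near this solution since x > 0 there).
    r = ncp_residual(x, F)
    h = 1e-7
    J = (ncp_residual(x + h, F) - r) / h
    x = x - r / J
print(x[0])  # converges to 1.0
```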
A torsion-free abelian group of finite rank is called almost completely decomposable if it has a completely decomposable subgroup of finite index. A p-local, p-reduced almost completely decomposable group of type (1,2) is briefly called a (1,2)-group. Almost completely decomposable groups can be represented by matrices over the ring Z/hZ, where h is the exponent of the regulator quotient. This particular choice of representation allows for a better investigation of the decomposability of the group. Arnold and Dugas showed in several of their works that (1,2)-groups with regulator quotient of exponent at least p^7 allow infinitely many isomorphism types of indecomposable groups. It is not known whether the exponent 7 is minimal. This dissertation addresses this problem.
This work studies the convergence of trajectories of gradient-like systems. In the first part, continuous-time gradient-like systems are examined. Results of Łojasiewicz and Kurdyka on the convergence of integral curves of gradient systems to single points are extended to a class of gradient-like vector fields and gradient-like differential inclusions. In the second part, discrete-time gradient-like optimization methods on manifolds are studied. Methods for smooth and for nonsmooth optimization problems are considered, and for these methods some convergence results are proven. Additionally, the optimization methods for nonsmooth cost functions are applied to sphere packing problems on adjoint orbits.
In this work, linear systems of elliptic partial differential equations in weak formulation on conical domains are investigated. On an initially unbounded cone we consider the case of bounded coefficient functions depending only on the angular variables. The bilinear form they define is assumed to satisfy a Gårding inequality. Existence and uniqueness questions are settled in weighted Sobolev spaces, where the problem is reduced, via the Fourier transform, to a family T(·) of Fredholm operators depending on a complex parameter. Applying the residue calculus, we obtain a representation of the solution as a decomposition into a smooth part on the one hand and a finite sum of singular functions on the other. By cut-off techniques, these results are applied to the case of weakly formulated elliptic systems on bounded conical domains, posed in ordinary, non-weighted Sobolev spaces. In the last chapter of the work, the eigenvalues of the operator function T with minimal positive imaginary part, which are decisive for regularity questions, are computed numerically for the example of the planar elasticity equations.
Many optimization problems for a smooth cost function f on a manifold M can be solved by determining the zeros of a vector field F, such as the gradient of the cost function f. If F does not depend on additional parameters, numerous zero-finding techniques are available for this purpose. It is a natural generalization, however, to consider time-dependent optimization problems that require the computation of time-varying zeros of time-dependent vector fields F(x,t). Such parametric optimization problems arise in many fields of applied mathematics, in particular in path-following problems in robotics, recursive eigenvalue and singular value estimation in signal processing, as well as in numerical linear algebra and inverse eigenvalue problems in control theory. In the literature, there are already some tracking algorithms for these tasks, but they do not always adequately respect the manifold structure. Hence, available tracking results can often be improved by implementing methods working directly on the manifold. Thus, intrinsic methods are of interest that evolve on the manifold during the entire computation. It is the task of this thesis to develop such intrinsic zero-finding methods. The main results of this thesis are as follows: - A new class of continuous and discrete tracking algorithms is proposed for computing zeros of time-varying vector fields on Riemannian manifolds. This is achieved by studying the newly introduced time-varying Newton flow and the time-varying Newton algorithm on Riemannian manifolds. - A convergence analysis is performed on arbitrary Riemannian manifolds. - These results are made concrete on submanifolds, including a new class of algorithms based on local parameterizations. - More specific results in Euclidean space are obtained by considering inexact and underdetermined time-varying Newton flows.
- These newly introduced algorithms are illustrated by time-varying tracking tasks in three application areas: subspace analysis, matrix decompositions (in particular EVD and SVD) and computer vision.
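The basic tracking idea can be sketched in the simplest Euclidean setting; this is only a correction-only Newton scheme on a scalar example, a much cruder method than the time-varying Newton algorithms of the thesis, which also use a prediction term involving the time derivative of F.

```python
import math

def track_zero(F, dFdx, x0, times):
    # At each time instant t_k, perform one Newton correction step so that
    # x_k stays near the moving zero x*(t_k) of F(., t_k).
    x, path = x0, []
    for t in times:
        x = x - F(x, t) / dFdx(x, t)
        path.append(x)
    return path

# Moving zero x*(t) = sin(t) of F(x, t) = x - sin(t); since F is linear in x,
# a single Newton step per time instant lands exactly on the moving zero.
F = lambda x, t: x - math.sin(t)
dFdx = lambda x, t: 1.0
path = track_zero(F, dFdx, 0.0, [k * 0.1 for k in range(50)])
```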
The main subject of this work is the approximation of solutions of partial differential equations with Dirichlet boundary conditions by spline functions. Partial differential equations find application, for instance, in electrostatics, elasticity theory, fluid mechanics, and in the study of the propagation of heat and sound. Some approximation problems do not possess a unique solution. By applying the penalized least squares method, it was shown that the uniqueness of the sought solution of certain minimization problems can be ensured. Under certain circumstances, even greater stability of the numerical method can be gained. For the numerical investigations, an extensive, efficient C program was written, which formed the basis for confirming the theoretical predictions in practical applications.
The incidence matrices of many combinatorial structures satisfy the so-called rectangular rule, i.e., the scalar product of any two rows of the matrix is at most 1. We study a class of matrices satisfying the rectangular rule, the regular block matrices. Some regular block matrices are submatrices of incidence matrices of finite projective planes. Necessary and sufficient conditions are given for regular block matrices to be submatrices of projective planes. Moreover, regular block matrices are related to another combinatorial structure, the symmetric configurations. In particular, it turns out that, using this relationship, the existence of several symmetric configurations can be concluded from the existence of a projective plane.
We investigate iterative numerical algorithms with shifts as nonlinear discrete-time control systems. Our approach is based on the interpretation of reachable sets as orbits of the system semigroup. In the first part we develop tools for the systematic analysis of the structure of reachable sets of general invertible discrete-time control systems. To this end we merge classical concepts, such as geometric control theory, semigroup actions and semialgebraic geometry. Moreover, we introduce new concepts such as right divisible systems and the repelling phenomenon. In the second part we apply the semigroup approach to the investigation of concrete numerical iteration schemes. We extend the known results about the reachable sets of classical inverse iteration. Moreover, we investigate the structure of reachable sets and system group orbits of inverse iteration on flag manifolds and Hessenberg varieties, rational iteration schemes, Richardson's method and linear control schemes. In particular, we obtain necessary and sufficient conditions for controllability and for the appearance of repelling phenomena. Furthermore, a new algorithm for solving linear equations (LQRES) is derived.
Mathematica is an excellent program for carrying out mathematical computations, even very complex ones, in a relatively simple way. This script is intended to give a really short introduction to Mathematica and to serve as a reference for some common applications of Mathematica. The following rough outline is used: - Basics: graphical interface, simple computations, entering formulas - Usage: presentation of some commands and insight into how they work - Practice: worked computations of some Abitur and exercise problems
In many questions in which a population is divided into different classes, it is not so much the relative class size as the number of classes that matters. For example, the biologist is interested in how many species of a genus there are, the numismatist in how many coins or mints there were in an epoch, the computer scientist in how many distinct entries there are in a very large database, the programmer in how many bugs a piece of software contains, and the Germanist in how large the vocabulary of an author was or is. This species richness is the simplest and most intuitive way to characterize a population. However, only in collections in which the total number of members is known and relatively small can the number of distinct species be determined by a complete census. In all other cases it is necessary to determine the number of species by estimation.
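One classical estimator of the number of classes in a population, sketched here purely for illustration (the work itself may use different methods), is the Chao1 lower bound, which corrects the observed class count using the singleton and doubleton frequencies.

```python
from collections import Counter

def chao1(sample):
    # Chao1 lower-bound estimate of the number of classes:
    #   S_est = S_obs + f1^2 / (2 * f2),
    # where f1 and f2 are the numbers of classes observed exactly once
    # and exactly twice, respectively.
    counts = Counter(sample)
    freq = Counter(counts.values())
    s_obs, f1, f2 = len(counts), freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        # Bias-corrected variant for the degenerate case f2 = 0.
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

sample = ["a", "a", "b", "c", "c", "d", "e"]  # singletons: b, d, e; doubletons: a, c
print(chao1(sample))  # 5 + 3^2 / (2 * 2) = 7.25
```

Intuitively, many singletons relative to doubletons signal that many classes remain unseen, so the estimate exceeds the observed count.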
It is well known that a multivariate extreme value distribution can be represented via a D-norm. However, not every norm yields a D-norm. In this thesis a necessary and sufficient condition is given for a norm to define an extreme value distribution. Applications of this theorem include a new proof for the bivariate case, the Pickands dependence function and the nested logistic model. Furthermore, the GPD flow is introduced and first insights are given: if it converges, then it converges to the copula of complete dependence.
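A standard example from the multivariate extreme value literature (not specific to this thesis) illustrates the representation: the logistic norm is a D-norm, and it generates a max-stable distribution with standard negative exponential margins. The function names below are illustrative.

```python
import math

def logistic_norm(x, lam):
    # Logistic norm ||x||_lam = (sum_i |x_i|^lam)^(1/lam), lam >= 1;
    # a standard example of a D-norm.
    return sum(abs(t) ** lam for t in x) ** (1.0 / lam)

def evd_logistic(x, lam):
    # Extreme value distribution generated by the norm, with standard
    # negative exponential margins: G(x) = exp(-||x||_lam) for x <= 0.
    assert all(t <= 0.0 for t in x)
    return math.exp(-logistic_norm(x, lam))

# lam = 1 gives the 1-norm, i.e. independent margins: G(x) = prod_i exp(x_i);
# lam -> infinity gives the sup-norm, i.e. complete dependence.
print(evd_logistic((-1.0, -1.0), 1.0))   # exp(-2): independence
print(evd_logistic((-1.0, -1.0), 50.0))  # close to exp(-1): near complete dependence
```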
A new class of optimization problems named 'mathematical programs with vanishing constraints (MPVCs)' is considered. MPVCs are, on the one hand, very challenging from a theoretical viewpoint, since standard constraint qualifications such as LICQ, MFCQ, or ACQ are most often violated, and hence the Karush-Kuhn-Tucker conditions do not provide necessary optimality conditions off-hand. Thus, new CQs and the corresponding optimality conditions are investigated. On the other hand, MPVCs have important applications, e.g., in the field of topology optimization. Therefore, numerical algorithms for the solution of MPVCs are designed, investigated and tested on certain problems from truss topology optimization.
In the generalized Nash equilibrium problem, not only a player's cost function but also his constraints depend on the rival players' decisions. This thesis presents different iterative methods for the numerical computation of a generalized Nash equilibrium, some of them globally, others locally superlinearly convergent. These methods are based either on reformulations of the generalized Nash equilibrium problem as an optimization problem, or on a fixed point formulation. The key tool for these reformulations is the Nikaido-Isoda function. Numerical results for various problems from the literature are given.
Mathematical programs with equilibrium constraints (or complementarity constraints), MPECs for short, are known to be extremely hard optimization problems. Finding local minima or suitable stationary points is a nontrivial task. This thesis describes how the special structure of MPECs can nevertheless be exploited, using a branch-and-bound method to obtain a global minimum of linear programs with equilibrium constraints (LPECs). Furthermore, this branch-and-bound algorithm is employed within a filter-SQPEC method in order to solve general MPECs. A global convergence theorem is proved for the filter-SQPEC method. In addition, numerical results are reported for both approaches.
It is well known that the least squares estimator performs poorly in the presence of multicollinearity. One way to overcome this problem is to use biased estimators, e.g. ridge regression estimators. In this study an estimation procedure is proposed that is based on adding a small quantity omega to some or all of the regressors. The resulting biased estimator is described as a function of omega, and it is furthermore shown that its mean squared error is smaller than that of the least squares estimator in the case of highly correlated regressors.
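Since the abstract contrasts the proposed estimator with ridge regression, a small Monte Carlo sketch of that classical baseline may help (this illustrates the ridge estimator only, with made-up parameters; the thesis's omega-based perturbation is its own construction): under near-collinearity, accepting a small bias can drastically reduce the total mean squared error of the coefficient estimates.

```python
import numpy as np

# Monte Carlo comparison of OLS and ridge under near-collinearity.
rng = np.random.default_rng(1)
n, reps = 50, 200
beta = np.array([1.0, 1.0])
k = 0.1                                   # ridge parameter (assumed value)

mse_ols = mse_ridge = 0.0
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = x1 + 0.01 * rng.normal(size=n)   # x2 nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = X @ beta + rng.normal(size=n)
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)
    b_ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)
    mse_ols += np.sum((b_ols - beta) ** 2)
    mse_ridge += np.sum((b_ridge - beta) ** 2)

mse_ols /= reps
mse_ridge /= reps
print(mse_ols, mse_ridge)  # the ridge MSE is far smaller here
```

The huge OLS variance comes from the near-singular matrix X'X; the ridge term k*I shifts its smallest eigenvalue away from zero.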
We discuss exceptional polynomials, i.e. polynomials over a finite field $k$ that induce bijections over infinitely many finite extensions of $k$. In the first chapters we give the theoretical background needed to characterize this class of polynomials by Galois-theoretic means. This leads to the notion of arithmetic and geometric monodromy groups. In the remaining chapters we restrict our attention to polynomials with primitive affine arithmetic monodromy group. We first classify all exceptional polynomials for which the fixed field of the affine kernel of the arithmetic monodromy group has genus at most 2. Next we show that every full affine group can be realized as the monodromy group of a polynomial. In the last chapters we classify affine polynomials of a given degree.
Controllability Aspects of the Lindblad-Kossakowski Master Equation : A Lie-Theoretical Approach
(2009)
One main task, which is considerably important in many applications in quantum control, is to explore the possibilities of steering a quantum system from an initial state to a target state. This thesis focuses on fundamental control-theoretical issues of quantum dynamics described by the Lindblad-Kossakowski master equation, which arises as a bilinear control system on some underlying real vector spaces, e.g., controllability aspects and the structure of reachable sets. Based on Lie-algebraic methods from nonlinear control theory, the thesis presents a unified approach to control problems of finite dimensional closed and open quantum systems. In particular, a simplified treatment for controllability of closed quantum systems as well as new accessibility results for open quantum systems are obtained. The main tools used to derive the results are the well-known classifications of all matrix Lie groups which act transitively on Grassmann manifolds and, respectively, on real vector spaces without the origin. It is also shown in this thesis that accessibility of the Lindblad-Kossakowski master equation is a generic property. Moreover, based on the theoretical accessibility results, an algorithm is developed to decide when the Lindblad-Kossakowski master equation is accessible.
We study reachability matrices R(A, b) = [b, Ab, ..., A^{n-1}b], where A is an n × n matrix over a field K and b is a vector in K^n. We characterize those matrices that are reachability matrices for some pair (A, b). In the case of a cyclic matrix A and an n-vector of indeterminates x, we derive a factorization of the polynomial det(R(A, x)).
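The definition above renders directly into code (a minimal sketch with an assumed example pair; the characterization and factorization results are the paper's own):

```python
import numpy as np

# The reachability (Krylov) matrix R(A, b) = [b, Ab, ..., A^{n-1} b].
# The pair (A, b) is reachable exactly when R(A, b) is invertible.

def reachability_matrix(A, b):
    n = A.shape[0]
    cols = [b]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

# Assumed example: a 2 x 2 companion-type matrix.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
b = np.array([0.0, 1.0])
R = reachability_matrix(A, b)
print(np.linalg.matrix_rank(R))  # 2, so (A, b) is reachable
```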
This thesis is devoted to Bernoulli Stochastics, which was initiated by Jakob Bernoulli more than 300 years ago with his masterpiece 'Ars conjectandi', which can be translated as 'Science of Prediction'. Jakob Bernoulli's Stochastics thus focuses on prediction, in contrast to the later emerging disciplines of probability theory, statistics and mathematical statistics. Only recently was Jakob Bernoulli's focus taken up by Elart von Collani, who developed a unified theory of uncertainty aiming at making reliable and accurate predictions. In this thesis, teaching material as well as a virtual classroom are developed for fostering the ideas and techniques initiated by Jakob Bernoulli and elaborated by Elart von Collani. The thesis is part of an extensive project called 'Stochastikon', which aims at introducing Bernoulli Stochastics as a unified science of prediction and measurement under uncertainty. This ambitious aim shall be reached by the development of an internet-based comprehensive system offering the science of Bernoulli Stochastics on any level of application. It is planned that the 'Stochastikon' system (http://www.stochastikon.com/) will consist of five subsystems. Two of them are developed and introduced in this thesis: the e-learning programme 'Stochastikon Magister' and 'Stochastikon Graphics', which provides the entire Stochastikon system with graphical illustrations. E-learning is the outcome of merging education and internet techniques. It is characterized by the fact that teaching and learning are independent of place and time and of the availability of specially trained teachers. Knowledge offering as well as knowledge transfer are realized by means of modern information technologies. Nowadays more and more e-learning environments are based on the internet as the primary tool for communication and presentation.
E-learning presentation tools are, for instance, text files, pictures, graphics, audio and videos, which can be networked with each other. Access to the teaching content is essentially unrestricted, and students can adapt the speed of learning to their individual abilities. E-learning is particularly appropriate for newly arising scientific and technical disciplines, which generally cannot be presented sufficiently well by traditional teaching methods, because neither trained teachers nor textbooks are available. The first part of this dissertation reviews the state of the art of e-learning in statistics, since statistics and Bernoulli Stochastics are both based on probability theory and exhibit many similar features. Since Stochastikon Magister is the first e-learning programme for Bernoulli Stochastics, educational statistics systems are selected for the purpose of comparison and evaluation. This makes sense as both disciplines attempt to handle uncertainty and use methods that can often be compared directly. The second part of this dissertation is devoted to Bernoulli Stochastics. It outlines the content of two courses, which have been developed for the anticipated e-learning programme Stochastikon Magister, in order to show the difficulties in teaching, understanding and applying Bernoulli Stochastics. The third part discusses the realization of the e-learning programme Stochastikon Magister, its design and implementation, which aims at offering a systematic study of the principles and techniques developed in Bernoulli Stochastics. The resulting e-learning programme differs from commonly developed e-learning programmes in that it attempts to provide a virtual classroom that simulates all the functions of real classroom teaching. This is in general not necessary, since most e-learning programmes aim at supporting existing classroom teaching.
The fourth part presents two empirical evaluations of Stochastikon Magister. The evaluations are performed by means of comparisons between traditional classroom learning in statistics and e-learning of Bernoulli Stochastics; the aim is to assess the usability and learnability of Stochastikon Magister. Finally, the fifth part of this dissertation is added as an appendix. It refers to Stochastikon Graphics, the fifth component of the entire Stochastikon system, which provides the other components with graphical representations of concepts, procedures and results obtained or used in the framework of Bernoulli Stochastics. The primary aim of this thesis is the development of appropriate software for the anticipated e-learning environment for Bernoulli Stochastics, while the preparation of the necessary teaching material constitutes only a secondary aim, used for demonstrating the functionality of the e-learning platform and the scientific novelty of Bernoulli Stochastics. To this end, first versions of two teaching courses are developed, implemented and offered online in order to collect practical experience. The two courses, which were developed as part of this project, are submitted as a supplement to this dissertation. First experience with the e-learning programme Stochastikon Magister has already been gained: students of different faculties of the University of Würzburg, as well as researchers and engineers involved in the Stochastikon project, have obtained access to Stochastikon Magister via the internet, registered for it and participated in the course programme. This thesis reports on two assessments of these first experiences, and the results will lead to further improvements with respect to the content and organization of Stochastikon Magister.
On the Fragility Index
(2011)
The Fragility Index captures the amount of risk in a stochastic system of arbitrary dimension. Its main mathematical tool is the asymptotic distribution of exceedance counts within the system, which can be derived by means of multivariate extreme value theory. The basic assumption is that the data come from a distribution which lies in the domain of attraction of a multivariate extreme value distribution. The Fragility Index itself and its extensions can serve as a quantitative measure of tail dependence in arbitrary dimensions. It is linked to the well-known extremal index for stochastic processes as well as to the extremal coefficient of an extreme value distribution.
In this thesis different algorithms for the solution of generalized Nash equilibrium problems are developed, with the focus on global convergence properties. A globalized Newton method for the computation of normalized solutions, a nonsmooth algorithm based on an optimization reformulation of the game-theoretic problem, a merit function approach, and an interior point method for the solution of the concatenated Karush-Kuhn-Tucker system are analyzed theoretically and numerically. The interior point method turns out to be one of the best existing methods for the solution of generalized Nash equilibrium problems.
In the following dissertation we consider three preconditioners of algebraic multigrid type; although they are defined for arbitrary prolongation and restriction operators, we study them in more detail for the aggregation method. The strengthened Cauchy-Schwarz inequality and the resulting angle between the spaces are our main interests, and in this context we introduce some modifications. For the problem of one-dimensional convection we obtain perfect theoretical results. Although this is not the case for more complex problems, the numerical results we present show that the modifications are also useful in these situations. Additionally, we consider a symmetric problem in the energy norm and present a simple rule for algebraic aggregation.
The analysis of real data by means of statistical methods, with the aid of a software package common in industry and administration, is usually not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements from time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS. Consequently this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, with SAS providing the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or with any particular computer system is expected, so only a short training period is needed. This book is meant for a two-semester course (lecture, seminar or practical training) where the first three chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 4, 5 and 6 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background some terms are useful, such as convergence in distribution, stochastic convergence and the maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises; an exhaustive treatment is recommended. Chapter 7 (case study) deals with a practical case and demonstrates the presented methods.
It is possible to use this chapter independently in a seminar or practical training course if the concepts of time series analysis are already well understood. The book is consistently subdivided into a statistical part and an SAS-specific part; for better clarity the SAS-specific parts are highlighted. This book is an open source project under the GNU Free Documentation License.
In this thesis we consider a reactive transport model with precipitation-dissolution reactions from the geosciences. It consists of PDEs, ODEs, algebraic equations (AEs) and complementarity conditions (CCs). After discretization of this model we obtain a huge nonlinear and nonsmooth system of equations. We tackle this system with the semismooth Newton method introduced by Qi and Sun. The focus of this thesis is on the application and convergence of this algorithm. We prove that the algorithm is well defined for this problem and locally, even quadratically, convergent at a BD-regular solution. We also deal with the arising linear systems, which are large and sparse, and with how they can be solved efficiently. An integral part of this investigation is the boundedness of a certain matrix-valued function, which is shown in a separate chapter. As a side quest we study how extremal eigenvalues (and singular values) of certain PDE operators, which are involved in our discretized model, can be estimated accurately.
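A one-dimensional toy version may clarify the method named above (a hypothetical example with an assumed F, not the reactive transport system): a complementarity condition 0 <= x, F(x) >= 0, x F(x) = 0 can be rewritten as the nonsmooth equation min(x, F(x)) = 0, to which a semismooth Newton step applies via an element of the generalized derivative.

```python
# Semismooth Newton for the min-function reformulation of a scalar
# complementarity problem (toy sketch, assumed data).

def F(x):
    return x * x - 1.0          # toy F; the complementarity solution is x = 1

def phi(x):
    return min(x, F(x))         # min-function reformulation

def phi_prime(x):
    # one element of the B-subdifferential of phi at x
    return 1.0 if x < F(x) else 2.0 * x

x = 0.5
for _ in range(20):
    if abs(phi(x)) < 1e-12:
        break
    x -= phi(x) / phi_prime(x)  # semismooth Newton step
print(x)  # converges quadratically to 1.0
```

Near the solution the iteration picks the F-branch of phi, so the step reduces to a classical Newton step on F, which is where the local quadratic rate comes from.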
The subject of this thesis is mathematical programs with complementarity conditions (MPCC). At first, an economic example of this problem class is analyzed: the problem of effort maximization in asymmetric n-person contest games. While an analytical solution could be derived for this special problem, this is not possible for MPCCs in general. Therefore, optimality conditions which might be used in numerical approaches were considered next. More precisely, a Fritz-John result for MPCCs with stronger properties than those known so far was derived, together with some new constraint qualifications, and subsequently used to prove an exact penalty result. Finally, to solve MPCCs numerically, the so-called relaxation approach was used. Besides improving the results for existing relaxation methods, a new relaxation with strong convergence properties was suggested and a numerical comparison of all methods based on the MacMPEC collection was conducted.
We introduce some mathematical framework for extreme value theory in the space of continuous functions on compact intervals and provide basic definitions and tools. Continuous max-stable processes on [0,1] are characterized by their “distribution functions” G which can be represented via a norm on function space, called D-norm. The high conformity of this setup with the multivariate case leads to the introduction of a functional domain of attraction approach for stochastic processes, which is more general than the usual one based on weak convergence. We also introduce the concept of “sojourn time transformation” and compare several types of convergence on function space. Again in complete accordance with the uni- or multivariate case it is now possible to get functional generalized Pareto distributions (GPD) W via W = 1 + log(G) in the upper tail. In particular, this enables us to derive characterizations of the functional domain of attraction condition for copula processes. Moreover, we investigate the sojourn time above a high threshold of a continuous stochastic process. It turns out that the limit, as the threshold increases, of the expected sojourn time given that it is positive, exists if the copula process corresponding to Y is in the functional domain of attraction of a max-stable process. If the process is in a certain neighborhood of a generalized Pareto process, then we can replace the constant threshold by a general threshold function and we can compute the asymptotic sojourn time distribution.
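The relation W = 1 + log(G) can be checked in the simplest univariate case (a standard fact shown here as a sketch, not the functional result itself): for the negative exponential max-stable distribution function G(x) = exp(x), x <= 0, one gets W(x) = 1 + x on [-1, 0], i.e. the uniform distribution on [-1, 0], which is a generalized Pareto distribution function.

```python
import numpy as np

# Univariate sanity check of W = 1 + log(G) for G(x) = exp(x), x <= 0.
x = np.linspace(-1.0, 0.0, 101)
G = np.exp(x)
W = 1.0 + np.log(G)              # equals 1 + x on [-1, 0]

# W approximates G in the upper tail: exp(x) = 1 + x + O(x^2) as x -> 0.
print(np.max(np.abs(G[-10:] - W[-10:])))  # small near x = 0
```

The last line illustrates why W describes the upper tail of G: the two functions agree to first order at the right endpoint.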
Mathematica is an excellent program for carrying out mathematical computations, even very complex ones, in a relatively simple way. This script is intended to give a truly short introduction to Mathematica and to serve as a reference for some common applications of Mathematica. The following rough outline is used: - Basics: graphical interface, simple computations, entering formulas - Usage: presentation of some commands and insight into how they work - Practice: worked examples of some Abitur and exercise problems
In this thesis, time-optimal control of the bi-steerable robot is addressed. The bi-steerable robot, a vehicle with two independently steerable axles, is a complex nonholonomic system with applications in many areas of land-based robotics. Motion planning and optimal control are challenging tasks for this system, since standard control schemes do not apply. The model of the bi-steerable robot considered here is a reduced kinematic model with the driving velocity and the steering angles of the front and rear axle as inputs. The steering angles of the two axles can be set independently of each other. The reduced kinematic model is a control system with affine and non-affine inputs, as the driving velocity enters the system linearly, whereas the steering angles enter nonlinearly. In this work, a new approach to solve the time-optimal control problem for the bi-steerable robot is presented. In contrast to most standard methods for time-optimal control, our approach does not rely exclusively on discretization and purely numerical methods. Instead, the Pontryagin Maximum Principle is used to characterize candidates for time-optimal solutions. The resulting boundary value problem is solved by optimization to obtain solutions to the path planning problem over a given time horizon. The time horizon is decreased and the path planning is iterated to approximate a time-optimal solution. An optimality condition is introduced which depends on the number of cusps, i.e., reversals of the driving direction of the robot. This optimality condition makes it possible to single out non-optimal solutions with too many cusps. In general, our approach only gives approximations of time-optimal solutions, since only normal regular extremals are considered as solutions to the path planning problem, and the path planning is terminated when an extremal with a minimal number of cusps is found.
However, for most desired configurations, normal regular extremals with the minimal number of cusps provide time-optimal solutions for the bi-steerable robot. The convergence of the approach is analyzed and its probabilistic completeness is shown. Moreover, simulation results on time-optimal solutions for the bi-steerable robot are presented.
This thesis is devoted to the numerical verification of optimality conditions for non-convex optimal control problems. In the first part, we are concerned with a-posteriori verification of sufficient optimality conditions. It is common knowledge that the verification of such conditions for general non-convex PDE-constrained optimization problems is very challenging. We propose a method to verify second-order sufficient conditions for a general class of optimal control problems. If the proposed verification method confirms the fulfillment of the sufficient condition, then a-posteriori error estimates can be computed. A special ingredient of our method is an error analysis for the Hessian of the underlying optimization problem. We derive conditions under which positive definiteness of the Hessian of the discrete problem implies positive definiteness of the Hessian of the continuous problem. The results are complemented with numerical experiments. In the second part, we investigate adaptive methods for optimal control problems with finitely many control parameters. We analyze a-posteriori error estimates based on the verification of second-order sufficient optimality conditions using the method developed in the first part. Reliability and efficiency of the error estimator are shown, and we illustrate through numerical experiments the use of the estimator in guiding adaptive mesh refinement.
The analysis of real data by means of statistical methods, with the aid of a software package common in industry and administration, is usually not an integral part of mathematics studies, but it will certainly be part of future professional work. The present book links up elements from time series analysis with a selection of statistical procedures used in general practice, including the statistical software package SAS. Consequently this book addresses students of statistics as well as students of other branches such as economics, demography and engineering, where lectures on statistics belong to their academic training. But it is also intended for the practitioner who, beyond the use of statistical tools, is interested in their mathematical background. Numerous problems illustrate the applicability of the presented statistical procedures, with SAS providing the solutions. The programs used are explicitly listed and explained. No previous experience with SAS or with any particular computer system is expected, so only a short training period is needed. This book is meant for a two-semester course (lecture, seminar or practical training) where the first three chapters can be dealt with in the first semester. They provide the principal components of the analysis of a time series in the time domain. Chapters 4, 5 and 6 deal with its analysis in the frequency domain and can be worked through in the second term. In order to understand the mathematical background some terms are useful, such as convergence in distribution, stochastic convergence and the maximum likelihood estimator, as well as a basic knowledge of test theory, so that work on the book can start after an introductory lecture on stochastics. Each chapter includes exercises; an exhaustive treatment is recommended. Chapter 7 (case study) deals with a practical case and demonstrates the presented methods.
It is possible to use this chapter independently in a seminar or practical training course if the concepts of time series analysis are already well understood. The book is consistently subdivided into a statistical part and an SAS-specific part; for better clarity the SAS-specific parts are highlighted. This book is an open source project under the GNU Free Documentation License.
In the verification of positive Harris recurrence of multiclass queueing networks, the stability analysis for the class of fluid networks is of vital interest. This thesis addresses the stability of fluid networks from a Lyapunov point of view. In particular, the focus is on converse Lyapunov theorems. To obtain a unified approach, the considerations are based on generic properties that fluid networks under widely used disciplines have in common. It is shown that the class of closed generic fluid network models (closed GFNs) is too wide to provide a reasonable Lyapunov theory. To overcome this fact, the class of strict generic fluid network models (strict GFNs) is introduced, in which closed GFNs are additionally required to satisfy a concatenation and a lower semicontinuity condition. We show that for strict GFNs a converse Lyapunov theorem holds which provides a continuous Lyapunov function. Moreover, it is shown that for strict GFNs satisfying a trajectory estimate a smooth converse Lyapunov theorem holds. To see that widely used queueing disciplines fulfill the additional conditions, fluid networks are considered from a differential inclusions perspective. Within this approach it turns out that fluid networks under general work-conserving, priority and proportional processor-sharing disciplines define strict GFNs. Furthermore, we provide an alternative proof of the fact that the Markov process underlying a multiclass queueing network is positive Harris recurrent if the associated fluid network, which defines a strict GFN, is stable. The proof explicitly uses the Lyapunov function admitted by the stable strict GFN. The differential inclusions approach also shows that first-in-first-out disciplines play a special role.
We study the symmetrised rank-one convex hull of monoclinic-I martensite (a twelve-variant material) in the context of geometrically-linear elasticity. We construct sets of T3s, which are (non-trivial) symmetrised rank-one convex hulls of 3-tuples of pairwise incompatible strains. Moreover we construct a five-dimensional continuum of T3s and show that its intersection with the boundary of the symmetrised rank-one convex hull is four-dimensional. We also show that there is another kind of monoclinic-I martensite with qualitatively different semi-convex hulls which, so far as we know, has not been experimentally observed. Our strategy is to combine understanding of the algebraic structure of symmetrised rank-one convex cones with knowledge of the faceting structure of the convex polytope formed by the strains.
Consider the situation where two or more images are taken of the same object. After taking the first image, the object is moved or rotated so that the second recording depicts it differently; in addition, the imaging technique may also have changed. One of the main problems in image processing is to determine the spatial relation between such images. The corresponding process of finding the spatial alignment is called “registration”. In this work, we study the optimization problem which corresponds to the registration task. In particular, we exploit the Lie group structure of the set of transformations to construct efficient, intrinsic algorithms. We also apply the algorithms to medical registration tasks; however, the methods developed are not restricted to the field of medical image processing. We also take a closer look at more general forms of optimization problems and show connections to related tasks.
Argumentation and proof have played a fundamental role in mathematics education in recent years. The author of this dissertation investigates the development of the proving process within a dynamic geometry system in order to support tertiary students in understanding the proving process. The strengths of such a dynamic system stimulate students to formulate conjectures and produce arguments during the proving process. Through empirical research, we classified different levels of proving and proposed a methodological model for proving. This methodological model contributes to improving students’ levels of proving and to developing their dynamic visual thinking. We used the Toulmin model of argumentation as a theoretical model to analyze the relationship between argumentation and proof. This research also offers some possible explanations as to why students have cognitive difficulties in constructing proofs and provides mathematics educators with a deeper understanding of the proving process within a dynamic geometry system.
Applications in various research areas, such as signal processing, quantum computing, and computer vision, can be described as constrained optimization tasks on certain subsets of tensor products of vector spaces. In this work, we make use of techniques from Riemannian geometry and analyze optimization tasks on subsets of so-called simple tensors which can be equipped with a differentiable structure. In particular, we introduce a generalized Rayleigh-quotient function on the tensor product of Grassmannians and on the tensor product of Lagrange-Grassmannians. Its optimization enables a unified approach to well-known tasks from different areas of numerical linear algebra, such as: best low-rank approximations of tensors (data compression), computing geometric measures of entanglement (quantum computing) and subspace clustering (image processing). We perform a thorough analysis of the critical points of the generalized Rayleigh quotient and develop intrinsic numerical methods for its optimization. Explicitly, using techniques from Riemannian optimization, we present two types of algorithms: a Newton-like and a conjugate gradient algorithm. Their performance is analysed and compared with established methods from the literature.
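The generalized Rayleigh quotient on tensor products is the thesis's own object; as a much simpler illustration of the Riemannian-optimization idea (a sketch with assumed data), consider gradient ascent for the ordinary Rayleigh quotient on the unit sphere, whose maximizers are eigenvectors of the largest eigenvalue.

```python
import numpy as np

# Riemannian gradient ascent for r(x) = x^T A x on the unit sphere:
# compute the Euclidean gradient, project it onto the tangent space,
# take a step, and retract back onto the sphere by normalization.

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 0.5]])   # assumed symmetric test matrix

x = rng.normal(size=3)
x /= np.linalg.norm(x)
step = 0.2
for _ in range(500):
    g = 2.0 * (A @ x)            # Euclidean gradient of x^T A x
    g -= (g @ x) * x             # project onto the tangent space at x
    x += step * g
    x /= np.linalg.norm(x)       # retraction back onto the sphere

lam = x @ A @ x
print(lam)  # approximately the largest eigenvalue of A
```

The projection and retraction are the two ingredients that make this an intrinsic method on the manifold rather than a constrained Euclidean one.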
Dysfunction of dopaminergic neurotransmission has been implicated in HIV infection. We showed previously increased dopamine (DA) levels in CSF of therapy-naïve HIV patients and an inverse correlation between CSF DA and CD4 counts in the periphery, suggesting adverse effects of high levels of DA on HIV infection. In the current study, including a total of 167 HIV-positive and negative donors from Germany and South Africa (SA), we investigated the mechanistic background for the increase of CSF DA in HIV individuals. Interestingly, we found that the DAT 10/10-repeat allele is present more frequently in HIV individuals than in uninfected subjects. Logistic regression analysis adjusted for gender and ethnicity showed an odds ratio for HIV infection in DAT 10/10 allele carriers of 3.93 (95 % CI 1.72–8.96; p = 0.001, Fisher's exact test). In SA, 42.6 % of HIV-infected patients harbored the DAT 10/10 allele compared to only 10.5 % of uninfected subjects (odds ratio 6.31), whereas in Germany the corresponding figures were 68.1 % versus 40.9 % (odds ratio 3.08). Subjects homozygous for the 10-repeat allele had higher amounts of CSF DA and reduced DAT mRNA expression but similar disease severity compared with those carrying other DAT genotypes. These intriguing and novel findings show the mutual interaction between DA and HIV, suggesting caution in interpreting CNS DA alterations in HIV infection solely as a phenomenon secondary to the virus, and open the door for larger studies investigating the consequences of the DAT functional polymorphism on HIV epidemiology and progression of disease.
Human herpesvirus-6 (HHV-6) exists in latent form, either as a nuclear episome or integrated into human chromosomes, in more than 90% of healthy individuals without causing clinical symptoms. Immunosuppression and stress conditions can reactivate HHV-6 replication, associated with clinical complications and even death. We have previously shown that co-infection of Chlamydia trachomatis and HHV-6 promotes chlamydial persistence and increases viral uptake in an in vitro cell culture model. Here we investigated C. trachomatis-induced HHV-6 activation in cell lines and fresh blood samples from patients with chromosomally integrated HHV-6 (ciHHV-6). We observed activation of latent HHV-6 DNA replication in ciHHV-6 cell lines and fresh blood cells without formation of viral particles. Interestingly, we detected HHV-6 DNA in blood as well as cervical swabs from C. trachomatis-infected women. Low virus titers correlated with high C. trachomatis load and vice versa, demonstrating a potentially significant interaction of these pathogens in blood cells and in the cervix of infected patients. Our data suggest a thus far underestimated interference of HHV-6 and C. trachomatis with a likely impact on the disease outcome as a consequence of co-infection.
The work at hand studies problems from Loewner theory and is divided into two parts:
In part 1 (chapter 2) we present the basic notions of Loewner theory. Here we use a modern form which was developed by F. Bracci, M. Contreras, S. Díaz-Madrigal et al. and which can be applied to certain higher dimensional complex manifolds.
We look at two domains in more detail: the Euclidean unit ball and the polydisc. Here we consider two classes of biholomorphic mappings which were introduced by T. Poreda and G. Kohr as generalizations of the class S.
We prove a conjecture of G. Kohr about support points of these classes. The proof relies on the observation that the classes describe so-called Runge domains, which follows from a result by L. Arosio, F. Bracci and E. F. Wold.
Furthermore, we prove a conjecture of G. Kohr about support points of a class of biholomorphic mappings that comes from applying the Roper-Suffridge extension operator to the class S.
In part 2 (chapter 3) we consider one special Loewner equation: the chordal multiple-slit equation in the upper half-plane.
After describing basic properties of this equation, we look at the problem of whether the coefficient functions in this equation can be chosen to be constant. D. Prokhorov proved this statement under the assumption that the slits are piecewise analytic. We use a completely different idea to solve the problem in its general form.
As the Loewner equation with constant coefficients holds everywhere (and not just almost everywhere), this result generalizes Loewner’s original idea to the multiple-slit case.
Moreover, we consider the following problems:
• The “simple-curve problem” asks which driving functions describe the growth of simple curves (in contrast to curves that touch themselves). We discuss necessary and sufficient conditions, generalize a theorem of J. Lind, D. Marshall and S. Rohde to the multiple-slit equation, and give an example of a set of driving functions which generate simple curves because of a certain self-similarity property.
• We discuss properties of driving functions that generate slits which enclose a given angle with the real axis.
• A theorem by O. Roth gives an explicit description of the reachable set of one point in the radial Loewner equation. We prove the analog for the chordal equation.
Background
Referring to individuals with reactivity to honey bee and Vespula venom in diagnostic tests, the umbrella terms “double sensitization” or “double positivity” cover patients with true clinical double allergy and those allergic to a single venom with asymptomatic sensitization to the other. There is no international consensus on whether immunotherapy regimens should generally include both venoms in double sensitized patients.
Objective
We investigated the long-term outcome of single venom-based immunotherapy with regard to potential risk factors for treatment failure and specifically compared the risk of relapse in mono sensitized and double sensitized patients.
Methods
Re-sting data were obtained from 635 patients who had completed at least 3 years of immunotherapy between 1988 and 2008. The adequate venom for immunotherapy was selected using an algorithm based on clinical details and the results of diagnostic tests.
Results
Of 635 patients, 351 (55.3%) were double sensitized to both venoms. The overall re-exposure rate to Hymenoptera stings during and after immunotherapy was 62.4%; the relapse rate was 7.1% (6.0% in mono sensitized, 7.8% in double sensitized patients). Recurring anaphylaxis was statistically less severe than the index sting reaction (P = 0.004). Double sensitization was not significantly related to relapsing anaphylaxis (P = 0.56), but there was a tendency towards an increased risk of relapse in a subgroup of patients with equal reactivity to both venoms in diagnostic tests (P = 0.15).
Conclusions
Single venom-based immunotherapy over 3 to 5 years effectively and long-lastingly protects the vast majority of both mono sensitized and double sensitized Hymenoptera venom allergic patients. Double venom immunotherapy is indicated in clinically double allergic patients reporting systemic reactions to stings of both Hymenoptera and in those with equal reactivity to both venoms in diagnostic tests who have not reliably identified the culprit stinging insect.
Purpose: Scarring after glaucoma filtering surgery remains the most frequent cause for bleb failure. The aim of this study was to assess if the postoperative injection of bevacizumab reduces the number of postoperative subconjunctival 5-fluorouracil (5-FU) injections. Further, the effect of bevacizumab as an adjunct to 5-FU on the intraocular pressure (IOP) outcome, bleb morphology, postoperative medications, and complications was evaluated.
Methods: Glaucoma patients (N = 61) who underwent trabeculectomy with mitomycin C were analyzed retrospectively (follow-up period of 25 ± 19 months). Surgery was performed exclusively by one experienced glaucoma specialist using a standardized technique. Patients in group 1 received subconjunctival applications of 5-FU postoperatively. Patients in group 2 received 5-FU and subconjunctival injection of bevacizumab.
Results: Group 1 had 6.4 ± 3.3 (range 0–15) 5-FU injections (mean ± standard deviation). Group 2 had 4.0 ± 2.8 (range 0–12) 5-FU injections. The added injection of bevacizumab significantly reduced the mean number of 5-FU injections by 2.4 ± 3.08 (P ≤ 0.005). There was no significantly lower IOP in group 2 compared to group 1. A significant reduction in vascularization and in corkscrew vessels was found in both groups (P < 0.0001, 7 days to last 5-FU), yet there was no difference between the two groups at the last follow-up. Postoperative complications were significantly more frequent in both groups when more 5-FU injections were applied (P = 0.008). No significant difference between the two groups in best corrected visual acuity (P = 0.852) or visual field testing (P = 0.610) from preoperative to last follow-up was found.
Conclusion: The postoperative injection of bevacizumab reduced the number of subconjunctival 5-FU injections significantly by 2.4 injections. A significant difference in postoperative IOP reduction, bleb morphology, and postoperative medication was not detected.
This thesis gives an overview of mathematical modeling of complex fluids, with a discussion of the underlying mechanical principles, an introduction to the energetic variational framework, and examples and applications. The purpose is to present a formal energetic variational treatment of the energies corresponding to models of physical phenomena and to derive PDEs for the complex fluid systems. The advantages of this approach over force-based modeling are, e.g., that for complex systems energy terms can be established in a relatively easy way, that force components within a system are not counted twice, and that this approach can naturally combine effects on different scales. We follow a lecture on complex fluids by Professor Dr. Chun Liu of Penn State University, USA, which he gave at the University of Wuerzburg during his Giovanni Prodi professorship in summer 2012. We elaborate on this lecture, consider parts of his work and publications, and substantially extend the lecture with our own calculations and arguments (for papers including an overview of the energetic variational treatment see [HKL10], [Liu11] and references therein).
In this thesis it is shown how the spread of infectious diseases can be described via mathematical models that capture the dynamic behavior of epidemics. Ordinary differential equations are used for the modeling process. SIR and SIRS models are distinguished, depending on whether a disease confers immunity to individuals after recovery or not. Each disease has characteristic parameters, such as the infection rate or the recovery rate, which indicate how aggressively a disease spreads and how long it takes an individual to recover, respectively. In general these parameters are time-varying and depend on population groups. For this reason, models with multiple subgroups are introduced, and switched systems are used to model the time-varying parameters.
When investigating such models, the so-called disease-free equilibrium is of interest, in which no infectives appear within the population. The question is whether there are conditions under which this equilibrium is stable. Necessary mathematical tools for the stability analysis are presented. The theory of ordinary differential equations, including Lyapunov stability theory, is fundamental. Moreover, convex and nonsmooth analysis, positive systems and differential inclusions are introduced. With these tools, sufficient conditions are given for the disease-free equilibrium of SIS, SIR and SIRS systems to be asymptotically stable.
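As a minimal illustration of the modeling approach (a single-group SIR model, not the switched multi-group systems analyzed in the thesis), the dynamics can be integrated with a forward Euler scheme; when the basic reproduction number beta/gamma is below one, the infective fraction decays toward the disease-free equilibrium.

```python
def simulate_sir(beta, gamma, s0, i0, r0, t_end=160.0, dt=0.01):
    """Forward-Euler integration of the classical SIR model
         S' = -beta*S*I,   I' = beta*S*I - gamma*I,   R' = gamma*I
    with the population normalized so that S + I + R = 1."""
    s, i, r = s0, i0, r0
    for _ in range(int(t_end / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

# with basic reproduction number beta/gamma = 0.6 < 1 the infective fraction
# decays toward the disease-free equilibrium
s_end, i_end, r_end = simulate_sir(beta=0.3, gamma=0.5, s0=0.99, i0=0.01, r0=0.0)
```

Since dS + dI + dR = 0 at every step, the scheme conserves the total population exactly, mirroring the invariant of the continuous model.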
The Riemann zeta-function forms a central object in multiplicative number theory; its value-distribution encodes deep arithmetic properties of the prime numbers. Here, a crucial role is played by the analytic behavior of the zeta-function on the so-called critical line. In this thesis we study the value-distribution of the Riemann zeta-function near and on the critical line. Among other topics, we focus on the following.
PART I: A modified concept of universality, a-points near the critical line and a denseness conjecture attributed to Ramachandra.
The critical line is a natural boundary of the Voronin-type universality property of the Riemann zeta-function. We modify Voronin's concept by adding a scaling factor to the vertical shifts that appear in Voronin's universality theorem and investigate whether this modified concept is appropriate to keep up a certain universality property of the Riemann zeta-function near and on the critical line. It turns out that it is mainly the functional equation of the Riemann zeta-function that restricts the set of functions which can be approximated by this modified concept around the critical line.
Levinson showed that almost all a-points of the Riemann zeta-function lie in a certain funnel-shaped region around the critical line. We complement Levinson's result: Relying on arguments of the theory of normal families and the notion of filling discs, we detect a-points in this region which are very close to the critical line.
According to a folklore conjecture (often attributed to Ramachandra) one expects that the values of the Riemann zeta-function on the critical line lie dense in the complex numbers. We show that there are certain curves which approach the critical line asymptotically and have the property that the values of the zeta-function on these curves are dense in the complex numbers.
Many of our results in part I are independent of the Euler product representation of the Riemann zeta-function and apply for meromorphic functions that satisfy a Riemann-type functional equation in general.
PART II: Discrete and continuous moments.
The Lindelöf hypothesis deals with the growth behavior of the Riemann zeta-function on the critical line. Due to classical works by Hardy and Littlewood, the Lindelöf hypothesis can be reformulated in terms of power moments to the right of the critical line. Tanaka showed recently that the expected asymptotic formulas for these power moments are true in a certain measure-theoretical sense; roughly speaking he omits a set of Banach density zero from the path of integration of these moments. We provide a discrete and integrated version of Tanaka's result and extend it to a large class of Dirichlet series connected to the Riemann zeta-function.
This paper presents an alternative approach for obtaining a converse Lyapunov theorem for discrete-time systems. The proposed approach is constructive, as it provides an explicit Lyapunov function. The developed converse theorem establishes the existence of global Lyapunov functions for globally exponentially stable (GES) systems and of semi-global practical Lyapunov functions for globally asymptotically stable systems. Furthermore, for specific classes of systems, the developed converse theorem can be used to establish non-conservatism of a particular type of Lyapunov function. Most notably, a proof that conewise linear Lyapunov functions are non-conservative for GES conewise linear systems is given and, as a by-product, a tractable construction of polyhedral Lyapunov functions for linear systems is attained.
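For the special case of a linear GES system, the classical constructive converse argument can be reproduced in a few lines: summing the stage cost along trajectories yields a quadratic Lyapunov function with an exact decrease. This sketch illustrates the constructive idea only, not the paper's conewise or polyhedral constructions.

```python
import numpy as np

# For a linear GES system x+ = A x (spectral radius of A below one), the classical
# constructive converse argument sums the stage cost along trajectories:
#     V(x) = x' P x   with   P = sum_{k>=0} (A')^k Q A^k,
# which gives the exact decrease V(Ax) - V(x) = -x' Q x.
A = np.array([[0.5, 0.2],
              [0.0, 0.4]])
Q = np.eye(2)
P = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(200):          # truncated geometric series; converges since rho(A) < 1
    P += Ak.T @ Q @ Ak
    Ak = A @ Ak

x = np.array([1.0, -2.0])
decrease = (A @ x) @ P @ (A @ x) - x @ P @ x   # equals -x' Q x
```

The truncation error of the series is of order rho(A)^(2N), so 200 terms already reproduce the decrease identity to machine precision here.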
The Factorization Method is a noniterative method to detect the shape and position of conductivity anomalies inside an object. The method was introduced by Kirsch for inverse scattering problems and extended to electrical impedance tomography (EIT) by Brühl and Hanke. Since these pioneering works, substantial progress has been made on the theoretical foundations of the method. The necessary assumptions have been weakened, and the proofs have been considerably simplified. In this work, we aim to summarize this progress and present a state-of-the-art formulation of the Factorization Method for EIT with continuous data. In particular, we formulate the method for general piecewise analytic conductivities and give short and self-contained proofs.
Background
The prevalence of obesity is rising. Obesity can lead to cardiovascular and ventilatory complications through multiple mechanisms. Cardiac and pulmonary function in asymptomatic subjects, and the effect of structured dietary programs on cardiac and pulmonary function, are unclear.
Objective
To determine lung and cardiac function in asymptomatic obese adults and to evaluate whether weight loss positively affects functional parameters.
Methods
We prospectively evaluated bodyplethysmographic and echocardiographic data in asymptomatic subjects undergoing a structured one-year weight reduction program.
Results
74 subjects (32 male, 42 female; mean age 42±12 years) with an average BMI of 42.5±7.9 and a body weight of 123.7±24.9 kg were enrolled. Body weight correlated negatively with vital capacity (R = −0.42, p<0.001) and FEV1 (R = −0.497, p<0.001), and positively with P0.1 (R = 0.32, p = 0.02) and myocardial mass (R = 0.419, p = 0.002). After 4 months the study subjects had significantly reduced their body weight (−26.0±11.8 kg) and BMI (−8.9±3.8), associated with a significant improvement of lung function (absolute changes: vital capacity +5.5±7.5% pred., p<0.001; FEV1 +9.8±8.3% pred., p<0.001; ITGV +16.4±16.0% pred., p<0.001; SRtot −17.4±41.5% pred., p<0.01). Moreover, P0.1/Pimax decreased to 47.7% (p<0.01), indicating a decreased respiratory load. The change of FEV1 correlated significantly with the change of body weight (R = −0.31, p = 0.03). Echocardiography demonstrated reduced myocardial wall thickness (−0.08±0.2 cm, p = 0.02) and an improved left ventricular myocardial performance index (−0.16±0.35, p = 0.02). Mitral annular plane systolic excursion (+0.14, p = 0.03) and pulmonary outflow acceleration time (AT +26.65±41.3 ms, p = 0.001) increased.
Conclusion
Even in asymptomatic individuals, obesity is associated with abnormalities in pulmonary and cardiac function and with increased myocardial mass. These abnormalities can be reversed by a weight reduction program.
The Cauchy problem for a simplified shallow elastic fluids model, a 3 × 3 system of Temple type, is studied, and a global weak solution is obtained using the compensated compactness theorem coupled with total variation estimates on the first and third Riemann invariants, where the second Riemann invariant is singular near the zero layer depth (ρ = 0). This work extends, in some sense, the previous works of Serre (1987) and LeVeque and Temple (1985), which provided the global existence of weak solutions for 2 × 2 strictly hyperbolic systems, and of Heibig (1994) for n × n strictly hyperbolic systems with smooth Riemann invariants.
Background
It is hypothesized that, because of higher mast cell numbers and mediator release, mastocytosis predisposes patients to systemic immediate-type hypersensitivity reactions to certain drugs, including non-steroidal anti-inflammatory drugs (NSAID).
Objective
To clarify whether patients with NSAID hypersensitivity show increased basal serum tryptase levels as sign for underlying mast cell disease.
Methods
As part of our allergy work-up, basal serum tryptase levels were determined in all patients with a diagnosis of NSAID hypersensitivity and the severity of the reaction was graded. Patients with confirmed IgE-mediated hymenoptera venom allergy served as a comparison group.
Results
Out of 284 patients with NSAID hypersensitivity, 26 were identified with basal serum tryptase > 10.0 ng/mL (9.2%). In contrast, significantly (P = .004) more hymenoptera venom allergic patients had elevated tryptase > 10.0 ng/mL (83 out of 484; 17.1%). Basal tryptase > 20.0 ng/mL was indicative for severe anaphylaxis only in venom allergic subjects (29 patients; 4x grade 2 and 25x grade 3 anaphylaxis), but not in NSAID hypersensitive patients (6 patients; 4x grade 1, 2x grade 2).
Conclusions
In contrast to hymenoptera venom allergy, NSAID hypersensitivity does not seem to be associated with elevated basal serum tryptase levels, and levels > 20 ng/mL were not related to increased severity of the clinical reaction. This suggests that mastocytosis patients may be treated with NSAID without special precautions.
In this thesis we study smoothness properties of primal and dual gap functions for generalized Nash equilibrium problems (GNEPs) and finite-dimensional quasi-variational inequalities (QVIs). These gap functions are optimal value functions of primal and dual reformulations of a corresponding GNEP or QVI as a constrained or unconstrained optimization problem. Depending on the problem type, the primal reformulation uses regularized Nikaido-Isoda or regularized gap function approaches. For player convex GNEPs and QVIs of the so-called generalized 'moving set' type the respective primal gap functions are continuously differentiable. In general, however, these primal gap functions are nonsmooth for both problems. Hence, we investigate their continuity and differentiability properties under suitable assumptions. Here, our main result states that, apart from special cases, all locally minimal points of the primal reformulations are points of differentiability of the corresponding primal gap function.
Furthermore, we develop dual gap functions for a class of GNEPs and QVIs and ensuing unconstrained optimization reformulations of these problems based on an idea by Dietrich ("A smooth dual gap function solution to a class of quasivariational inequalities", Journal of Mathematical Analysis and Applications 235, 1999, pp. 380–393). For this purpose we rewrite the primal gap functions as a difference of two strongly convex functions and employ the Toland-Singer duality theory. The resulting dual gap functions are continuously differentiable and, under suitable assumptions, have piecewise smooth gradients. Our theoretical analysis is complemented by numerical experiments. The solution methods employed make use of the first-order information established by the aforementioned theoretical investigations.
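For a plain (non-quasi-) variational inequality over a box, Fukushima's regularized gap function has a closed-form maximizer given by a projection, which makes its defining property, g(x) ≥ 0 with equality exactly at solutions, easy to check numerically. The sketch below (with regularization parameter 1 and illustrative data of our own choosing) shows this simplest case, not the GNEP/QVI gap functions developed in the thesis.

```python
import numpy as np

def reg_gap(F, x, lo, hi):
    """Fukushima's regularized gap function (regularization parameter 1) for the
    variational inequality: find x in [lo, hi] with F(x)'(y - x) >= 0 for all
    feasible y.  The maximizer of F(x)'(x - y) - (1/2)||x - y||^2 over the box
    is the projection y* = proj_[lo,hi](x - F(x)), so g has a closed form."""
    Fx = F(x)
    y = np.clip(x - Fx, lo, hi)
    return Fx @ (x - y) - 0.5 * np.dot(x - y, x - y)

# illustrative data: F(x) = x - (2, -3) is strongly monotone; the VI solution is
# the projection of (2, -3) onto the unit box, namely x* = (1, 0)
F = lambda x: x - np.array([2.0, -3.0])
lo, hi = np.zeros(2), np.ones(2)
x_star = np.array([1.0, 0.0])
# g(x) >= 0 everywhere, and g(x) = 0 exactly at the solution
```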
The thesis “Hurwitz's Complex Continued Fractions - A Historical Approach and Modern Perspectives” deals with two branches of mathematics: number theory and the history of mathematics. At first glance this might be unexpected; on closer inspection, however, it is a very fruitful combination. When doing research in mathematics, it turns out to be very helpful to be aware of the beginnings and development of the corresponding subject.
In the case of complex continued fractions the origins can easily be traced back to the end of the 19th century (see [Perron, 1954, vol. 1, Ch. 46]). One of their godfathers was the famous mathematician Adolf Hurwitz. During the study of his transformation from real to complex continued fraction theory [Hurwitz, 1888], our attention was arrested by the 1895 article “Ueber eine besondere Art der Kettenbruch-Entwicklung complexer Grössen” [Hurwitz, 1895] by an author called J. Hurwitz. We were surprised to find out that he was Adolf's elder, unknown brother Julius; furthermore, Julius Hurwitz introduced a complex continued fraction that also appeared (unmentioned) in an ergodic theoretical work from 1985 [Tanaka, 1985]. Those observations formed the basis of our main research questions:
What is the historical background of Adolf and Julius Hurwitz and their mathematical studies? and What modern perspectives are provided by their complex continued fraction expansions?
In this work we examine complex continued fractions from various viewpoints. After a brief introduction on real continued fractions, we firstly devote ourselves to the lives of the brothers Adolf and Julius Hurwitz. Two excursions on selected historical aspects in respect to their work complete this historical chapter. In the sequel we shed light on Hurwitz’s, Adolf’s as well as Julius’, approaches to complex continued fraction expansions.
Correspondingly, in the following chapter we take a more modern perspective. Highlights are an ergodic theoretical result, namely a variation on the Döblin-Lenstra Conjecture [Bosma et al., 1983], as well as a result on transcendental numbers in the tradition of Roth's theorem [Roth, 1955]. In two subsequent chapters we are concerned with arithmetical properties of complex continued fractions. Firstly, an analogue of Marshall Hall's theorem from 1947 [Hall, 1947] on sums of continued fractions is derived. Secondly, a general approach to new types of continued fractions is presented, building on the structural properties of lattices. Finally, in the last chapter we take up this approach and obtain an upper bound for the approximation quality of diophantine approximations by quotients of lattice points in the complex plane, generalizing a method of Hermann Minkowski, improved by Hilde Gintner [Gintner, 1936], based on ideas from the geometry of numbers.
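Adolf Hurwitz's nearest-Gaussian-integer expansion itself is easy to state algorithmically: subtract the nearest Gaussian integer and invert the remainder. The sketch below is an illustrative floating-point version (the function names are ours), not an exact-arithmetic implementation.

```python
def hurwitz_cf(z, max_terms=12):
    """Adolf Hurwitz's complex continued fraction (illustrative floating-point
    sketch): repeatedly subtract the nearest Gaussian integer and invert the
    remainder, so every remainder lies in the unit square around the origin."""
    terms = []
    for _ in range(max_terms):
        a = complex(round(z.real), round(z.imag))  # nearest Gaussian integer
        terms.append(a)
        z -= a
        if abs(z) < 1e-12:                         # expansion terminated
            break
        z = 1 / z
    return terms

def evaluate(terms):
    """Reconstruct the value from the partial quotients a_0 + 1/(a_1 + 1/(...))."""
    val = terms[-1]
    for a in reversed(terms[:-1]):
        val = a + 1 / val
    return val

terms = hurwitz_cf(0.3 + 0.7j)
```

Reconstructing the value from the partial quotients recovers the input, since `evaluate` simply inverts the recursion step by step.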
In attempting to solve the regular inverse Galois problem for arbitrary subfields K of C (particularly for K=Q), a very important result by Fried and Völklein reduces the existence of regular Galois extensions F|K(t) with Galois group G to the existence of K-rational points on components of certain moduli spaces for families of covers of the projective line, known as Hurwitz spaces.
In some cases, the existence of rational points on Hurwitz spaces has been proven by theoretical criteria. In general, however, the question whether a given Hurwitz space has any rational point remains a very difficult problem. In concrete cases, it may be tackled by an explicit computation of a Hurwitz space and the corresponding family of covers.
The aim of this work is to collect and expand on the various techniques that may be used to solve such computational problems and apply them to tackle several families of Galois theoretic interest. In particular, in Chapter 5, we compute explicit curve equations for Hurwitz spaces for certain families of \(M_{24}\) and \(M_{23}\).
These are (to my knowledge) the first examples of explicitly computed Hurwitz spaces of such high genus. They might be used to realize \(M_{23}\) as a regular Galois group over Q if one manages to find suitable points on them.
Apart from the calculation of explicit algebraic equations, we produce complex approximations for polynomials with genus zero ramification of several different ramification types in \(M_{24}\) and \(M_{23}\). These may be used as starting points for similar computations.
The main motivation for these computations is the fact that \(M_{23}\) is currently the only remaining sporadic group that is not known to occur as a Galois group over Q.
We also compute the first explicit polynomials with Galois groups \(G=P\Gamma L_3(4), PGL_3(4), PSL_3(4)\) and \(PSL_5(2)\) over Q(t).
Special attention will be given to reality questions. As an application we compute the first examples of totally real polynomials with Galois groups \(PGL_2(11)\) and \(PSL_3(3)\) over Q.
As a suggestion for further research, we describe an explicit algorithmic version of "Algebraic Patching", following the theory described e.g. by M. Jarden. This could be used to conquer some problems regarding families of covers of genus g>0.
Finally, we present explicit Magma implementations for several of the most important algorithms involved in our computations.
The purpose of confidence and prediction intervals is to provide an interval estimate for an unknown distribution parameter or the future value of a phenomenon. In many applications, prior knowledge about the distribution parameter is available but rarely made use of, except in a Bayesian framework. This thesis provides exact frequentist confidence intervals of minimal volume exploiting prior information. The scheme is applied to distribution parameters of the binomial and the Poisson distribution. The Bayesian approach to obtaining intervals on a distribution parameter in the form of credibility intervals is considered, with particular emphasis on the binomial distribution. An application of interval estimation is found in auditing, where two-sided intervals of Stringer type are meant to contain the mean of a zero-inflated population. In the context of time series analysis, covariates are supposed to improve the prediction of future values. Exponential smoothing with covariates, an extension of the popular forecasting method exponential smoothing, is considered in this thesis. A double-seasonality version of it is applied to forecast hourly electricity load using meteorological covariates. Different kinds of prediction intervals for exponential smoothing with covariates are formulated.
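A toy version of exponential smoothing with a single covariate can be sketched as follows; the update rule and the least-squares covariate estimate are illustrative simplifications of our own, not the specific double-seasonality scheme used for the electricity-load forecasts.

```python
import numpy as np

def ses_with_covariate(y, x, alpha=0.3, b=None):
    """Toy exponential smoothing with one covariate: the covariate effect b is
    removed, the remaining level is exponentially smoothed, and forecasts add
    the effect back in.  (Illustrative simplification; here the covariate
    coefficient is estimated by ordinary least squares.)"""
    if b is None:
        b = np.polyfit(x, y, 1)[0]           # hypothetical choice of estimator
    level = y[0] - b * x[0]
    for yt, xt in zip(y[1:], x[1:]):
        level = alpha * (yt - b * xt) + (1 - alpha) * level
    return level, b

def forecast(level, b, x_future):
    """One-step-ahead forecast given a known future covariate value."""
    return level + b * x_future

# noise-free check: y = 10 + 2 x is tracked exactly
x = np.arange(10.0)
y = 10.0 + 2.0 * x
level, b = ses_with_covariate(y, x)
```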
An efficient and accurate computational framework for solving control problems governed by quantum spin systems is presented. Spin systems are extremely important in modern quantum technologies such as nuclear magnetic resonance spectroscopy, quantum imaging and quantum computing. In these applications, two classes of quantum control problems arise: optimal control problems and exact-controllability problems, both with a bilinear control structure. These models correspond to the Schrödinger-Pauli equation, describing the time evolution of a spinor, and the Liouville-von Neumann master equation, describing the time evolution of a density operator. This thesis focuses on quantum control problems governed by these models. An appropriate definition of the optimization objectives and of the admissible set of control functions allows one to construct controls with specific properties. These properties are in general required by the physics and the technologies involved in quantum control applications. A main purpose of this work is to address non-differentiable quantum control problems. For this reason, a computational framework is developed to address optimal control problems, with a possible L1-penalization term in the cost functional, and exact-controllability problems. In both cases the set of admissible control functions is a subset of a Hilbert space. The bilinear control structure of the quantum model, the L1-penalization term and the control constraints generate high nonlinearities that make the corresponding control problems difficult to solve and analyse. The first part of this thesis focuses on the physical description of the spin of particles and of the magnetic resonance phenomenon. Afterwards, the controlled Schrödinger-Pauli equation and the Liouville-von Neumann master equation are discussed. These equations, like many other controlled quantum models, can be represented by dynamical systems with a bilinear control structure.
In the second part of this thesis, theoretical investigations of optimal control problems, with a possible L1-penalization term in the objective and control constraints, are considered. In particular, existence of solutions, optimality conditions, and regularity properties of the optimal controls are discussed. In order to solve these optimal control problems, semi-smooth Newton methods are developed and proved to be superlinearly convergent. The main difficulty in the implementation of a Newton method for optimal control problems comes from the dimension of the Jacobian operator. In discrete form, the Jacobian is a very large matrix, which makes its construction infeasible from a practical point of view. For this reason, the focus of this work is on inexact Krylov-Newton methods, which combine the Newton method with Krylov iterative solvers for linear systems and avoid the construction of the discrete Jacobian. In the third part of this thesis, two methodologies for the exact controllability of quantum spin systems are presented. The first method consists of a continuation technique, while the second method is based on a particular reformulation of the exact-control problem. Both methodologies address minimum-L2-norm exact-controllability problems. In the fourth part, the thesis focuses on the numerical analysis of quantum control problems. In particular, the modified Crank-Nicolson scheme as an adequate time discretization of the Schrödinger equation is discussed, the first-discretize-then-optimize strategy is used to obtain a discrete reduced-gradient formula for the differentiable part of the optimization objective, and implementation details and globalization strategies that guarantee an adequate numerical behaviour of semi-smooth Newton methods are treated.
In the last part of this work, several numerical experiments are performed to validate the theoretical results and demonstrate the ability of the proposed computational framework to solve quantum spin control problems.
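The standard Crank-Nicolson step can be sketched in a few lines for a single spin-1/2: it is a Cayley transform of the Hamiltonian and hence exactly unitary, so the norm of the spinor is preserved over many steps. This illustrates the basic scheme only, not the thesis's modified variant or the bilinear control terms.

```python
import numpy as np

def cn_step(H, psi, dt):
    """One Crank-Nicolson step for i dpsi/dt = H psi:
        psi+ = (I + i dt/2 H)^{-1} (I - i dt/2 H) psi.
    For Hermitian H this is a Cayley transform and hence exactly unitary."""
    I = np.eye(H.shape[0])
    return np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)

# a single spin-1/2 evolving under H = sigma_x; the norm of the spinor
# is preserved over many steps
H = np.array([[0.0, 1.0],
              [1.0, 0.0]], dtype=complex)
psi = np.array([1.0, 0.0], dtype=complex)
for _ in range(100):
    psi = cn_step(H, psi, dt=0.05)
```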
In this thesis, discrete moments of the Riemann zeta-function and allied Dirichlet series are studied.
In the first part, the asymptotic value-distribution of zeta-functions is studied, where the samples are taken from a Cauchy random walk on a vertical line inside the critical strip. Building on techniques by Lifshits and Weber, analogous results for the Hurwitz zeta-function are derived. Using Atkinson's dissection, this is even generalized to Dirichlet L-functions associated with a primitive character. Both results indicate that the expectation value equals one, which shows that the values of these zeta-functions are small on average.
The second part deals with the logarithmic derivative of the Riemann zeta-function on vertical lines and here the samples are with respect to an explicit ergodic transformation. Extending work of Steuding, discrete moments are evaluated and an equivalent formulation for the Riemann Hypothesis in terms of ergodic theory is obtained.
In the third and last part of the thesis, the phenomenon of universality with respect to stochastic processes is studied. It is shown that certain random shifts of the zeta-function can approximate non-vanishing analytic target functions as well as we please. This result relies on Voronin's universality theorem.
The subject of this thesis is the rigorous passage from discrete systems to continuum models via variational methods.
The first part of this work studies a discrete model describing a one-dimensional chain of atoms with finite range interactions of Lennard-Jones type. We derive an expansion of the ground state energy using \(\Gamma\)-convergence. In particular, we show that a variant of the Cauchy-Born rule holds true for the model under consideration. We exploit this observation to derive boundary layer energies due to asymmetries of the lattice at the boundary or at cracks of the specimen. In this way, we extend several results obtained previously for models involving only nearest and next-to-nearest neighbour interactions by Braides and Cicalese and by Scardia, Schlömerkemper and Zanini.
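A minimal numerical picture of the Cauchy-Born behaviour can be obtained as follows. This sketch is illustrative and much simpler than the thesis' setting: atoms on a line interact only through a normalized nearest-neighbour Lennard-Jones potential with minimum at r = 1, and the ground state is found by direct minimization; the Cauchy-Born rule then suggests the minimizer is (close to) an equispaced lattice at that spacing.

```python
import numpy as np
from scipy.optimize import minimize

def lj(r):
    # Normalized Lennard-Jones potential: minimum value -1 attained at r = 1.
    return r**-12 - 2.0 * r**-6

def chain_energy(x):
    # Total energy of the chain: sum over nearest-neighbour bonds only.
    return lj(np.diff(x)).sum()

n = 20
x0 = np.linspace(0.0, 1.05 * n, n + 1)   # slightly stretched initial chain
res = minimize(chain_energy, x0, method="BFGS")
spacings = np.diff(res.x)                # should all be close to 1
```

With longer-range interactions, as in the thesis, the bulk spacing minimizes an effective cell energy instead of the bare pair potential, and boundary layers appear near the chain ends.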
The second part of this thesis is devoted to the analysis of a quasi-continuum (QC) method. To this end, we consider the discrete model studied in the first part of this thesis as the fully atomistic model problem and construct an approximation based on a QC method. We show that in an elastic setting the expansion by \(\Gamma\)-convergence of the fully atomistic energy and its QC approximation coincide. In the case of fracture, we show that this is not true in general. In the case of only nearest and next-to-nearest neighbour interactions, we give sufficient conditions on the QC approximation such that, also in the case of fracture, the minimal energies of the fully atomistic energy and its approximation coincide in the limit.
Extreme value theory aims at modeling extreme but rare events from a probabilistic point of view. It is well-known that so-called generalized Pareto distributions, which are briefly reviewed in Chapter 1, are the only reasonable probability distributions suited for modeling observations above a high threshold, such as waves exceeding the height of a certain dike, earthquakes having at least a certain intensity, and, after applying a simple transformation, share prices falling below some low threshold. However, there are cases for which a generalized Pareto model might fail. Therefore, Chapter 2 derives certain neighborhoods of a generalized Pareto distribution and provides several statistical tests for these neighborhoods, where the cases of observing finite-dimensional data and of observing continuous functions on [0,1] are considered. By using a notation based on so-called D-norms it is shown that these tests consistently link both frameworks, the finite-dimensional and the functional one. Since the derivation of the asymptotic distributions of the test statistics requires certain technical restrictions, Chapter 3 analyzes these assumptions in more detail. It provides in particular some examples of distributions that satisfy the null hypothesis and of those that do not. Since continuous copula processes are crucial tools for the functional versions of the proposed tests, it is also discussed whether those copula processes actually exist for a given set of data. Moreover, some practical advice is given on how to choose the free parameters incorporated in the test statistics. Finally, a simulation study in Chapter 4 compares the three different test statistics with another test found in the literature that has a similar null hypothesis. This thesis ends with a short summary of the results and an outlook on further open questions.
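The peaks-over-threshold idea reviewed in Chapter 1 can be sketched in a few lines: excesses over a high threshold are fitted by a generalized Pareto distribution (GPD). The data below are synthetic (standard exponential, for which the true GPD shape is 0 and the scale is 1) and serve only to illustrate the fitting step, not any data set from the thesis.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic peaks-over-threshold example: exceedances of a high threshold
# are modelled by a generalized Pareto distribution.
rng = np.random.default_rng(42)
data = rng.standard_exponential(20_000)
u = np.quantile(data, 0.95)              # high threshold
excesses = data[data > u] - u            # exceedances, shifted to excesses

# Fit a GPD to the excesses; for exponential data the memorylessness
# property makes the excesses exactly exponential (shape 0, scale 1).
shape, loc, scale = genpareto.fit(excesses, floc=0.0)
```

In practice the threshold choice is a bias-variance trade-off; the tests developed in Chapter 2 address the prior question of whether a GPD model is adequate at all.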
The advent of computers in mathematics classrooms has brought with it a variety of new forms of representation, among them multiple, dynamically linked representations of mathematical problems. This thesis answers the question of whether and how these kinds of representation are used by students in their argumentation. In the empirical study, a quantitative analysis investigated how strongly the form of representation given in the task statement influences students' written arguments. In addition, a qualitative analysis identified specific ways in which the representations are used and described them by means of Toulmin's model of argumentation. These findings were used to formulate consequences for the use of multiple and/or dynamic representations in secondary school mathematics teaching.
Analysis of discretization schemes for Fokker-Planck equations and related optimality systems
(2015)
The Fokker-Planck (FP) equation is a fundamental model in thermodynamic kinetic theories and statistical mechanics. In general, the FP equation appears in a number of different fields in the natural sciences, for instance in solid-state physics, quantum optics, chemical physics, theoretical biology, and circuit theory. These equations also provide a powerful means to define robust control strategies for random models. The FP equations are partial differential equations (PDEs) describing the time evolution of the probability density function (PDF) of stochastic processes.
These equations are of different types depending on the underlying stochastic process.
In particular, they are parabolic PDEs for the PDF of Ito processes, and hyperbolic PDEs for piecewise deterministic processes (PDP).
A fundamental axiom of probability calculus requires that the integral of the PDF over the entire allowable state space must be equal to one, for all time. Therefore, for the purpose of accurate numerical simulation, a discretized FP equation must guarantee conservation of the total probability. Furthermore, since the solution of the FP equation represents a probability density, any numerical scheme that approximates the FP equation is required to guarantee the positivity of the solution. In addition, an approximation scheme must be accurate and stable.
For these purposes, for parabolic FP equations on bounded domains, we investigate the Chang-Cooper (CC) scheme for space discretization combined with first- and second-order backward time differencing. We prove that the resulting space-time discretization schemes are accurate, conditionally stable, conservative, and positivity preserving.
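The conservative and positivity-preserving character of the CC space discretization can be illustrated on the Ornstein-Uhlenbeck FP equation f_t = (x f + D f_x)_x. The sketch below is a simplified stand-in: it uses the standard CC edge weights with explicit Euler time stepping under a CFL restriction, whereas the thesis analyzes first- and second-order backward (implicit) time differencing.

```python
import numpy as np

# Chang-Cooper space discretization for f_t = (x f + D f_x)_x on [-5, 5],
# zero-flux boundaries, explicit Euler in time (illustration only).
D, h, dt = 1.0, 0.1, 0.002
x = np.linspace(-5.0, 5.0, 101)          # cell centers, spacing h
f = np.exp(-((x - 1.0) ** 2))            # nonnegative initial density
f /= f.sum() * h                         # total probability one

xe = 0.5 * (x[:-1] + x[1:])              # interior cell edges
w = h * xe / D                           # local Peclet-type number per edge
# CC weight: delta -> 1/2 (central differencing) as w -> 0, upwinded otherwise;
# this choice reproduces the exact exponential stationary density.
delta = np.where(np.abs(w) < 1e-8, 0.5, 1.0 / w - 1.0 / np.expm1(w))

for _ in range(500):
    flux = xe * ((1.0 - delta) * f[1:] + delta * f[:-1]) \
         + D * (f[1:] - f[:-1]) / h
    F = np.concatenate(([0.0], flux, [0.0]))   # zero-flux boundary conditions
    f = f + dt / h * (F[1:] - F[:-1])          # conservative (telescoping) update

total_probability = f.sum() * h              # conserved up to roundoff
```

Because the update telescopes over the fluxes and the zero-flux boundary values vanish, the total probability is conserved exactly; positivity holds here because the CFL condition keeps all update coefficients nonnegative.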
Further, we discuss a finite difference discretization for the FP system corresponding to a PDP process in a bounded domain.
Next, we discuss FP equations in unbounded domains.
In this case, finite-difference or finite-element methods cannot be applied. By employing a suitable set of basis functions, spectral methods make it possible to treat unbounded domains. Since FP solutions decay exponentially at infinity, we consider Hermite functions as basis functions, that is, Hermite polynomials multiplied by a Gaussian.
To this end, the Hermite spectral discretization is applied to two different FP equations: the parabolic PDE corresponding to Ito processes, and the system of hyperbolic PDEs corresponding to a PDP process. The resulting discretized schemes are analyzed. Stability and spectral accuracy of the Hermite spectral discretization of the FP problems are proved. Furthermore, we investigate the conservativity of the solutions of FP equations discretized with the Hermite spectral scheme.
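The basis behind this discretization can be made concrete with a short sketch: the Hermite functions psi_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi)) are Hermite polynomials damped by a Gaussian and form an orthonormal basis of L^2(R), matching the decay of FP solutions at infinity. The normalization and the quadrature check below are standard facts, not code from the thesis.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

def hermite_function(n, x):
    """Orthonormal Hermite function psi_n(x) = H_n(x) e^{-x^2/2} / norm."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # coefficient vector selecting H_n
    norm = math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return hermval(x, coeffs) * np.exp(-x**2 / 2.0) / norm

# Verify orthonormality with Gauss-Hermite quadrature; the rule is exact here
# since the integrands are polynomials times exp(-x^2) of low enough degree.
nodes, weights = hermgauss(40)
gram = np.array([[np.sum(weights * np.exp(nodes**2)
                         * hermite_function(m, nodes)
                         * hermite_function(n, nodes))
                  for n in range(5)]
                 for m in range(5)])     # should be the 5 x 5 identity
```

Expanding the PDF in this basis turns the FP operator into the sparse-band coefficient systems mentioned below.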
In the last part of this thesis, we discuss optimal control problems governed by FP equations and the characterization of their solutions by optimality systems. We then investigate the Hermite spectral discretization of FP optimality systems in unbounded domains.
Within the framework of Hermite discretization, we obtain sparse-band systems of ordinary differential equations. We analyze the accuracy of the discretization schemes by showing spectral convergence in approximating the state, the adjoint, and the control variables that appear in the FP optimality systems.
To validate our theoretical estimates, we present results of numerical experiments.
The investigation of interacting multi-agent models is a new field of mathematical research with applications to the study of behavior in groups of animals or communities of people. One interesting feature of multi-agent systems is collective behavior. From the mathematical point of view, one of the challenging issues concerning these dynamical models is the development of control mechanisms that are able to influence the time evolution of these systems.
In this thesis, we focus on the study of controllability, stabilization, and optimal control problems for multi-agent systems, considering the following three models. The first one is the Hegselmann-Krause (HK) opinion formation model. The HK dynamics describes how individuals' opinions change through interaction with others within a bounded domain of confidence. The study of this model focuses on determining feedback controls that drive the agents' opinions to reach a desired agreement. The second model is the Heider social balance (HB) model. The HB dynamics explains the evolution of relationships in a social network. One purpose of studying this system is the construction of a control function in order to steer the relationships towards a friendship state. The third model that we discuss is a flocking model describing collective motion observed in biological systems. The flocking model under consideration includes self-propelling, friction, attraction, repulsion, and alignment features. We investigate a control for steering the flocking system to track a desired trajectory. Common to all these systems is our strategy of adding a leader agent that interacts with all other members of the system and carries the control mechanism.
Our control-through-leadership approach is developed using classical theoretical control methods and a model predictive control (MPC) scheme. To apply the former, for each model the stability of the corresponding linearized system near consensus is investigated, and local controllability is examined. However, only for the Hegselmann-Krause opinion formation model is a feedback control determined that steers the agents' opinions to globally converge to a desired agreement. The MPC approach is an optimal control strategy based on numerical optimization. To apply the MPC scheme, optimal control problems for each model are formulated, where the objective functions differ depending on the desired goal of the problem. The first-order necessary optimality conditions for each problem are presented. Moreover, for the numerical treatment, a sequence of open-loop discrete optimality systems is solved by accurate Runge-Kutta schemes, and in the optimization procedure a nonlinear conjugate gradient solver is implemented. Finally, numerical experiments are performed to investigate the properties of the multi-agent models and to demonstrate the ability of the proposed control strategies to drive multi-agent systems to attain a desired consensus and to track a given trajectory.
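The control-through-leadership idea can be sketched for the HK model: followers average the opinions of all agents (including the leader) within their confidence bound, while the leader's opinion is steered towards a target agreement. The simple exponential leader drift below is an illustrative stand-in chosen for this sketch, not the feedback law or MPC controller developed in the thesis.

```python
import numpy as np

# Schematic leader-controlled Hegselmann-Krause dynamics (Euler time stepping).
rng = np.random.default_rng(1)
eps, dt, target = 0.5, 0.05, 0.5
x = rng.uniform(0.0, 1.0, 20)            # followers' initial opinions
leader = x.mean()                        # leader starts at the mean opinion

for _ in range(2000):
    opinions = np.concatenate(([leader], x))
    new_x = np.empty_like(x)
    for i, xi in enumerate(x):
        # Bounded confidence: agent i only sees opinions within distance eps.
        neighbours = opinions[np.abs(opinions - xi) <= eps]
        new_x[i] = xi + dt * (neighbours.mean() - xi)   # HK averaging step
    x = new_x
    leader += dt * (target - leader)     # leader drifts to the target opinion
```

Because the leader interacts with every follower inside the confidence bound, the whole population is dragged towards the target once a cluster forms around the leader.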
The goal of this thesis is to investigate conformal mappings onto circular arc polygon domains, i.e. domains that are bounded by polygons consisting of circular arcs instead of line segments.
Conformal mappings onto circular arc polygon domains contain parameters in addition to the classical parameters of the Schwarz-Christoffel transformation. To contribute to the parameter problem of conformal mappings from the unit disk onto circular arc polygon domains, we investigate two special cases of these mappings. In the first case we can describe the additional parameters if the bounding circular arc polygon is a polygon with straight sides. In the second case we provide an approximation for the additional parameters if the circular arc polygon domain satisfies some symmetry conditions. These results allow us to draw conclusions on the connection between these additional parameters and the classical parameters of the mapping.
For conformal mappings onto multiply connected circular arc polygon domains, we provide an alternative construction of the mapping formula without using the Schottky-Klein prime function. In the process of constructing our main result, mappings for domains of connectivity three or greater, we also provide a formula for conformal mappings onto doubly connected circular arc polygon domains. The comparison of these mapping formulas with already known mappings allows us to provide values for some of the parameters of the mappings onto doubly connected circular arc polygon domains if the image domain is a polygonal domain.
The different components of the mapping formula are constructed by using a slightly modified variant of the Poincaré theta series. This construction includes the design of a function to remove unwanted poles and of different versions of functions that are analytic on the domain of definition of the mapping functions and satisfy some special functional equations.
We also provide the necessary concepts to numerically evaluate the conformal mappings onto multiply connected circular arc polygon domains. Since the evaluation of such a map requires the solution of a differential equation, we provide a possible configuration of curves inside the preimage domain along which the equation can be solved, in addition to a description of the procedure for evaluating either the formula for the doubly connected case or the one for connectivity three or greater. We also describe the procedures for solving the parameter problem for multiply connected circular arc polygon domains.
The thesis focuses on the valuation of firms in a system context where cross-holdings of the firms in liabilities and equities are allowed and, therefore, systemic risk can be modeled on a structural level. A main property of such models is that, in order to determine the firm values, a pricing equilibrium has to be found. While there exists a small but growing amount of research on the existence and uniqueness of such price equilibria, the literature is still somewhat inconsistent. An example of this fact is that different authors define the underlying financial system in differing ways. Moreover, only few articles pay close attention to procedures for finding the pricing equilibria. In the existing publications, the provided algorithms mainly reflect the individual authors' particular approach to the problem. Additionally, all existing methods have the drawback of potentially infinite runtime.
For these reasons, the objectives of this thesis are as follows. First, a definition of a financial system is introduced in its most general form in Chapter 2. It is shown that under a fairly mild regularity condition the financial system has a unique payment equilibrium. In Chapter 3, some extensions and differing definitions of financial systems that exist in the literature are presented, and it is shown how these models can be embedded into the general model from the preceding chapter. Second, an overview of existing valuation algorithms for finding the equilibrium is given in Chapter 4, where the existing methods are generalized and their corresponding mathematical properties are highlighted. Third, a completely new class of valuation algorithms is developed in Chapter 4 that includes the additional information of whether a firm is in default or solvent under a current payment vector. This results in procedures that are able to find the solution of the system in a finite number of iteration steps. In Chapter 5, the developed concepts of Chapter 4 are applied to more general financial systems where more than one seniority level of debt is present. Chapter 6 develops optimal starting vectors for non-finite algorithms, and Chapter 7 compares the existing and the newly developed algorithms with respect to their efficiency in an extensive simulation study covering a wide range of possible settings for financial systems.
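The kind of iterative valuation algorithm the thesis generalizes can be sketched with a Picard (fixed-point) iteration for an Eisenberg-Noe-type payment equilibrium. The three-firm network below is made up purely for illustration: `p_bar` holds the nominal liabilities, `Pi[i, j]` is the fraction of firm i's payments owed to firm j, and `e` holds the outside assets, giving the fixed-point equation p = min(p_bar, e + Pi^T p).

```python
import numpy as np

# Hypothetical three-firm financial network (illustration only).
p_bar = np.array([10.0, 8.0, 6.0])       # nominal total liabilities
Pi = np.array([[0.0, 0.5, 0.5],          # relative liability matrix
               [0.5, 0.0, 0.5],
               [0.5, 0.5, 0.0]])
e = np.array([2.0, 3.0, 1.0])            # outside assets

p = p_bar.copy()                         # start from full payment
for _ in range(200):
    # Each firm pays the minimum of what it owes and what it has available.
    p_new = np.minimum(p_bar, e + Pi.T @ p)
    if np.allclose(p_new, p, atol=1e-12):
        break
    p = p_new                            # monotonically decreasing iterates
```

Starting from full payment, the iterates decrease monotonically to the greatest payment equilibrium; as the abstract notes, such iterations need not terminate after finitely many steps in general, which is precisely the drawback the default/solvent-set-based algorithms of Chapter 4 remove.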