The dissertation investigates the wide class of Epstein zeta-functions in terms of uniform distribution modulo one of the ordinates of their nontrivial zeros. The main results are a proof of a Landau-type theorem for all Epstein zeta-functions as well as uniform distribution modulo one of the zero ordinates of all Epstein zeta-functions associated with binary quadratic forms.
In financial mathematics, it is a typical approach to approximate financial markets operating in discrete time by continuous-time models such as the Black–Scholes model. Fitting this model gives rise to difficulties due to the discrete nature of market data. We thus model the pricing process of financial derivatives by the Black–Scholes equation, where the volatility is a function of a finite number of random variables. This reflects the influence of uncertain factors when determining volatility. The aim is to quantify the effect of this uncertainty when computing the price of derivatives. We apply the generalized Polynomial Chaos (gPC) method to numerically compute the uncertainty of the solution, using the stochastic Galerkin approach together with a finite difference method. We present an efficient numerical variation of this method based on a machine learning technique, the so-called Bi-Fidelity approach. This is illustrated with numerical examples.
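The effect of an uncertain volatility can be sketched by quadrature-based uncertainty propagation through the Black–Scholes formula; this is a collocation-style stand-in for the stochastic Galerkin gPC machinery of the abstract, and all parameters as well as the volatility model σ(ξ) = 0.2 + 0.05ξ are illustrative assumptions:

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call option
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# uncertain volatility: sigma(xi) = 0.2 + 0.05*xi with xi ~ N(0, 1)
nodes, weights = np.polynomial.hermite_e.hermegauss(16)  # probabilists' Hermite rule
weights = weights / np.sqrt(2.0 * np.pi)                 # normalize to a probability measure
prices = np.array([bs_call(100.0, 100.0, 1.0, 0.01, max(0.2 + 0.05 * xi, 1e-3))
                   for xi in nodes])

mean_price = float(weights @ prices)                     # expected derivative price
std_price = float(np.sqrt(max(weights @ prices**2 - mean_price**2, 0.0)))
```

The quadrature replaces the Galerkin projection: both compute moments of the price with respect to the random volatility, but quadrature needs only repeated deterministic solves.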
This thesis is an exploratory study of metacognition in engaging with mathematics. Building on the research literature presented, the use of metacognition is documented in a qualitative study of first-year students from various mathematics (teacher education) degree programmes. Using Mayring's qualitative content analysis, a category system for the concept of metacognition with regard to its use in mathematics is established, which extends previous systematizations. Finally, the use of the corresponding metacognitive aspects is demonstrated with examples of various concepts and procedures from calculus instruction.
This thesis investigates mathematical paper folding, in particular single-fold origami (»1-fach-Origami«), in a university context. The thesis consists of three parts.
The first part is essentially devoted to the subject-matter analysis of single-fold origami. In the first chapter we address the historical context of single-fold origami, consider axiomatic foundations, and discuss how axiomatizing single-fold origami might contribute to an understanding of the concept of an axiom. In the second chapter we describe the design of the associated exploratory study and present our research goals and questions. In the third chapter, single-fold origami is mathematized, defined, and examined in detail.
The second part deals with the courses »Axiomatisieren lernen mit Papierfalten« ("Learning to axiomatize with paper folding") that we designed and taught. In the fourth chapter we describe the teaching methodology and the design of the courses; the fifth chapter contains an excerpt from the courses.
The third part describes the associated tests. In the sixth chapter we explain the design of the tests as well as the testing methodology. In the seventh chapter these tests are evaluated.
We construct a foliation of an asymptotically flat end of a Riemannian manifold by hypersurfaces which are critical points of a natural functional arising in potential theory. These hypersurfaces are perturbations of large coordinate spheres, and they admit solutions of a certain over-determined boundary value problem involving the Laplace–Beltrami operator. In a key step we must invert the Dirichlet-to-Neumann operator, highlighting the nonlocal nature of our problem.
Bivariate copula monitoring
(2022)
The assumption of multivariate normality underlying the Hotelling T\(^{2}\) chart is often violated for process data. The multivariate dependency structure can be separated from the marginals with the help of copula theory, which makes it possible to model association structures beyond the covariance matrix. Copula-based estimation and testing routines have reached maturity regarding a variety of practical applications. We have constructed a rich design matrix for the comparison of the Hotelling T\(^{2}\) chart with the copula test by Verdier and the copula test by Vuong, which allows for weighting the observations adaptively. Based on the design matrix, we have conducted a large and computationally intensive simulation study. The results show that the copula test by Verdier performs better than Hotelling T\(^{2}\) in a large variety of out-of-control cases, whereas the weighted Vuong scheme often fails to provide an improvement.
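For reference, the Hotelling T\(^{2}\) chart against which the copula tests are compared can be sketched as follows; the Gaussian data, the bivariate setting, and the standard asymptotic chi-square control limit are illustrative choices, not the paper's design matrix or simulation study:

```python
import numpy as np

# Phase I: estimate the in-control mean and covariance from reference data
rng = np.random.default_rng(42)
cov = [[1.0, 0.6], [0.6, 1.0]]
reference = rng.multivariate_normal([0.0, 0.0], cov, size=500)
mu = reference.mean(axis=0)
Sinv = np.linalg.inv(np.cov(reference, rowvar=False))

def t2(x):
    # Hotelling T^2 statistic for a single new observation
    d = x - mu
    return float(d @ Sinv @ d)

UCL = 9.21  # chi-square(2 df) 0.99 quantile, the usual asymptotic control limit

# Phase II: monitor new observations and flag out-of-control points
in_control = rng.multivariate_normal([0.0, 0.0], cov, size=200)
alarms = sum(t2(x) > UCL for x in in_control)

# a mean shift inflates T^2 and should trigger alarms frequently
shifted = rng.multivariate_normal([3.0, 0.0], cov, size=200)
alarms_shifted = sum(t2(x) > UCL for x in shifted)
```

The copula-based charts of the paper replace the quadratic form `t2`, which only sees the covariance matrix, with test statistics sensitive to the full dependence structure.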
A sequential quadratic Hamiltonian scheme for solving open-loop differential Nash games is proposed and investigated. This method is formulated in the framework of the Pontryagin maximum principle and represents an efficient and robust extension of the successive approximations strategy for solving optimal control problems. Theoretical results are presented that prove the well-posedness of the proposed scheme, and results of numerical experiments are reported that successfully validate its computational performance.
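On a toy linear-quadratic problem with a single player (i.e. plain optimal control rather than a Nash game), the successive-approximation idea behind a sequential quadratic Hamiltonian iteration looks roughly as follows; the problem, the discretization, and the fixed penalty weight are our own illustrative choices:

```python
import numpy as np

# toy LQ problem: minimize J = 0.5 * int_0^1 (x^2 + u^2) dt  s.t.  x' = u, x(0) = 1
N, T = 200, 1.0
dt = T / N
eps = 1.0  # quadratic penalty weight in the augmented Hamiltonian

def forward(u):
    # state equation, explicit Euler
    x = np.empty(N + 1); x[0] = 1.0
    for k in range(N):
        x[k + 1] = x[k] + dt * u[k]
    return x

def backward(x):
    # adjoint equation: p' = -x, p(T) = 0
    p = np.empty(N + 1); p[N] = 0.0
    for k in range(N - 1, -1, -1):
        p[k] = p[k + 1] + dt * x[k + 1]
    return p

def cost(x, u):
    return 0.5 * dt * (np.sum(x[:-1]**2) + np.sum(u**2))

u = np.zeros(N)
for _ in range(100):
    x = forward(u)
    p = backward(x)
    # pointwise minimizer of the augmented Hamiltonian
    # H_eps = 0.5*x^2 + 0.5*u^2 + p*u + eps*(u - u_old)^2
    u = (2.0 * eps * u - p[:-1]) / (1.0 + 2.0 * eps)

J = cost(forward(u), u)  # should approach 0.5 * tanh(1) ~ 0.3808
```

The penalty term keeps each control update close to the previous iterate, which is what distinguishes this scheme from naive successive approximations; in the game setting, one such sweep is performed per player.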
We prove a sharp Bernstein-type inequality for complex polynomials which are positive and satisfy a polynomial growth condition on the positive real axis. This leads to an improved upper estimate in the recent work of Culiuc and Treil (Int. Math. Res. Not. 2019: 3301–3312, 2019) on the weighted martingale Carleson embedding theorem with matrix weights. In the scalar case this new upper bound is optimal.
Nowadays, science, technology, engineering, and mathematics (STEM) play a critical role in a nation’s global competitiveness and prosperity. Thus, there is a need to educate students in these subjects to meet the current and future demands of personal life and society. While applications in science, engineering, and technology are directly obvious, mathematics underpins the other STEM disciplines. Yet although mathematics is recognized as the foundation for all other STEM disciplines, its role in classrooms is not yet clear. Therefore, the question arises: What is the current role of mathematics in secondary STEM classrooms? To answer this question, we conducted a systematic literature review based on three publication databases (Web of Science, ERIC, and EBSCO Teacher Reference Center). This literature review is intended to capture the current state of the role of mathematics in STEM education in secondary classrooms. Starting with 1910 documents, the search yielded only 14 eligible documents. In these, mathematics is often seen as a minor matter and a means to an end in the eyes of science educators. From this, we conclude that the role of mathematics in the STEM classroom should be further strengthened. Overall, the paper highlights a major research gap and proposes possible initial solutions to close it.
We give a collection of 16 examples which show that compositions \(g\) \(\circ\) \(f\) of well-behaved functions \(f\) and \(g\) can be badly behaved. Remarkably, in 10 of the 16 examples it suffices to take as outer function \(g\) simply a power-type or characteristic function. Such a collection of examples may serve as a source of exercises for a calculus course.
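One classical instance in this spirit (our own illustration, not necessarily among the paper's 16 examples) uses a power-type outer function:

```latex
% f is smooth on all of R, g is continuous on [0, \infty) and smooth on (0, \infty),
% yet the composition fails to be differentiable at the origin:
f(x) = x^2, \qquad g(y) = \sqrt{y}, \qquad (g \circ f)(x) = \sqrt{x^2} = |x| .
```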
Composite optimization problems, where the sum of a smooth and a merely lower semicontinuous function has to be minimized, are often tackled numerically by means of proximal gradient methods as soon as the lower semicontinuous part of the objective function is of simple enough structure. The available convergence theory associated with these methods (mostly) requires the derivative of the smooth part of the objective function to be (globally) Lipschitz continuous, and this might be a restrictive assumption in some practically relevant scenarios. In this paper, we readdress this classical topic and provide convergence results for the classical (monotone) proximal gradient method and one of its nonmonotone extensions which are applicable in the absence of (strong) Lipschitz assumptions. This is possible since, for the price of forgoing convergence rates, we omit the use of descent-type lemmas in our analysis.
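A minimal sketch of the classical (monotone) proximal gradient method on a toy ℓ\(_1\)-regularized least-squares problem, where the smooth part does have a globally Lipschitz gradient (the easy case; the paper's contribution is to relax exactly this assumption):

```python
import numpy as np

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# composite problem: min_x  0.5 * ||A x - b||^2 + lam * ||x||_1
A = np.eye(2)
b = np.array([3.0, 0.5])
lam = 1.0
L = np.linalg.norm(A.T @ A, 2)  # Lipschitz constant of the smooth part's gradient
t = 1.0 / L                     # constant step size, valid under the Lipschitz assumption

x = np.zeros(2)
for _ in range(100):
    grad = A.T @ (A @ x - b)                    # gradient of the smooth part
    x = soft_threshold(x - t * grad, t * lam)   # proximal gradient step
```

With `A = I` the minimizer is the soft-thresholded data `[2, 0]`. Without global Lipschitz continuity, the constant step `t = 1/L` above is no longer available, which is where descent-lemma-free analyses such as the paper's come in.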
Let (ϕ\(_t\))\(_{t≥0}\) be a semigroup of holomorphic functions in the unit disk \(\mathbb {D}\) and K a compact subset of \(\mathbb {D}\). We investigate the conditions under which the backward orbit of K under the semigroup exists. Subsequently, the geometric characteristics, as well as potential-theoretic quantities, of the backward orbit of K are examined. More specifically, results are obtained concerning the asymptotic behavior of its hyperbolic area and diameter, the harmonic measure, and the capacity of the condenser that K forms with the unit disk.
Coisotropic algebras consist of triples of algebras for which a reduction can be defined; they unify, in a purely algebraic fashion, coisotropic reduction in several settings. In this paper, we study the (formal) deformation theory of coisotropic algebras, showing that deformations are governed by suitable coisotropic DGLAs. We define a deformation functor and prove that it commutes with reduction. Finally, we study the obstructions to existence and uniqueness of coisotropic algebras and present some geometric examples.
The article deals with the pedagogical content knowledge of mathematical modelling as part of the professional competence of pre-service teachers. With the help of a test developed for this purpose from a conceptual model, we examine whether this pedagogical content knowledge can be promoted in its different facets—especially knowledge about modelling tasks and about interventions—by suitable university seminars. For this purpose, the test was administered to three groups in a seminar for the teaching of mathematical modelling: (1) to those respondents who created their own modelling tasks for use with students, (2) to those trained to intervene in mathematical modelling processes, and (3) participating students who are not required to address mathematical modelling. The findings of the study—based on variance analysis—indicate that certain facets (knowledge of modelling tasks, modelling processes, and interventions) have increased significantly in both experimental groups but to varying degrees. By contrast, pre-service teachers in the control group demonstrated no significant change to their level of pedagogical content knowledge.
To study coisotropic reduction in the context of deformation quantization we introduce constraint manifolds and constraint algebras as the basic objects encoding the additional information needed to define a reduction. General properties of various categories of constraint objects and their compatibility with reduction are examined. A constraint Serre-Swan theorem, identifying constraint vector bundles with certain finitely generated projective constraint modules, as well as a constraint symbol calculus are proved. After developing the general deformation theory of constraint algebras, including constraint Hochschild cohomology and constraint differential graded Lie algebras, the second constraint Hochschild cohomology for the constraint algebra of functions on a constraint flat space is computed.
We generalize a theorem by Titchmarsh about the mean value of Hardy’s \(Z\)-function at the Gram points to the Hecke \(L\)-functions, which in turn implies the weak Gram law for them. Instead of proceeding analogously to Titchmarsh with an approximate functional equation we employ a different method using contour integration.
For a graph \(\Gamma\) , let K be the smallest field containing all eigenvalues of the adjacency matrix of \(\Gamma\) . The algebraic degree \(\deg (\Gamma )\) is the extension degree \([K:\mathbb {Q}]\). In this paper, we completely determine the algebraic degrees of Cayley graphs over abelian groups and dihedral groups.
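As a quick illustrative check (our example, not taken from the paper): the cycle C\(_5\), a Cayley graph over ℤ/5ℤ, has eigenvalues 2, (√5 − 1)/2, and −(√5 + 1)/2, which generate ℚ(√5), so its algebraic degree is 2:

```python
import numpy as np

# adjacency matrix of the cycle C5, a Cayley graph over Z/5Z
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

eig = np.sort(np.linalg.eigvalsh(A))

# eigenvalues are 2*cos(2*pi*k/5): 2 once, (sqrt(5)-1)/2 twice, -(sqrt(5)+1)/2 twice,
# so K = Q(sqrt(5)) and deg(C5) = [K : Q] = 2
golden = (np.sqrt(5.0) - 1.0) / 2.0
expected = np.sort([2.0, golden, golden, -(golden + 1.0), -(golden + 1.0)])
```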
The concept of derivative is characterised with reference to four basic mental models. These are described as theoretical constructs based on theoretical considerations. The four basic mental models—local rate of change, tangent slope, local linearity and amplification factor—are not only quantified empirically but are also validated. To this end, a test instrument for measuring students’ characteristics of basic mental models is presented and analysed regarding quality criteria.
Mathematics students (n = 266) were tested with this instrument. The test results show that the four basic mental models of the derivative can be reconstructed among the students with different characteristics. The tangent slope has the highest agreement values across all tasks. Agreement with explanations based on the basic mental model of rate of change is not as strongly established among students as one would expect, given the emphasis placed on it by curricula and educational standards in the school system. The basic mental model of local linearity plays a rather subordinate role. The amplification factor achieves the lowest agreement values. In addition, a cluster analysis was conducted to identify different subgroups of the student population. Moreover, the test results can be attributed to characteristics of the task types as well as to the students’ previous experiences from mathematics classes by means of qualitative interpretation. These and other results on students’ basic mental models of the derivative are presented and discussed in detail.
Mathematical concepts are regularly used in media reports concerning the Covid-19 pandemic. These include growth models, which attempt to explain or predict the effectiveness of interventions and developments, as well as the reproductive factor. Our contribution has the aim of showing that basic mental models about exponential growth are important for understanding media reports of Covid-19. Furthermore, we highlight how the coronavirus pandemic can be used as a context in mathematics classrooms to help students understand that they can and should question media reports on their own, using their mathematical knowledge. Therefore, we first present the role of mathematical modelling in achieving these goals in general. The same relevance applies to the necessary basic mental models of exponential growth. Following this description, based on three topics, namely, investigating the type of growth, questioning given course models, and determining exponential factors at different times, we show how the presented theoretical aspects manifest themselves in teaching examples when students are given the task of reflecting critically on existing media reports. Finally, the value of the three topics regarding the intended goals is discussed and conclusions concerning the possibilities and limits of their use in schools are drawn.
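The topic of "determining exponential factors at different times" reduces to a log-linear fit; a minimal sketch on synthetic (illustrative, not real Covid-19) case counts:

```python
import numpy as np

# synthetic case counts with a constant daily growth factor of 1.2 (illustrative data)
days = np.arange(30)
cases = 100.0 * 1.2**days

# exponential growth is linear on a log scale: log(cases) = log(100) + days * log(1.2)
slope, intercept = np.polyfit(days, np.log(cases), 1)
growth_factor = np.exp(slope)  # recovered daily growth factor
```

Fitting the same model to a window of real reported data lets students check whether growth is actually exponential (the fit is only good if the log-scale points are close to a line) and how the growth factor changes over time, which is exactly the kind of questioning of media reports described above.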
We extend Bourgain’s bound for the order of growth of the Riemann zeta function on the critical line to Lerch zeta functions. More precisely, we prove L(λ, α, 1/2 + it) ≪ t\(^{13/84+ϵ}\) as t → ∞. For both, the Riemann zeta function as well as for the more general Lerch zeta function, it is conjectured that the right-hand side can be replaced by t\(^ϵ\) (which is the so-called Lindelöf hypothesis). The growth of an analytic function is closely related to the distribution of its zeros.
For a connected real Lie group G we consider the canonical standard-ordered star product arising from the canonical global symbol calculus based on the half-commutator connection of G. This star product trivially converges on polynomial functions on T\(^*\)G thanks to its homogeneity. We define a nuclear Fréchet algebra of certain analytic functions on T\(^*\)G, for which the standard-ordered star product is shown to be a well-defined continuous multiplication, depending holomorphically on the deformation parameter \(\hbar\). This nuclear Fréchet algebra is realized as the completed (projective) tensor product of a nuclear Fréchet algebra of entire functions on G with an appropriate nuclear Fréchet algebra of functions on \({\mathfrak {g}}^*\). The passage to the Weyl-ordered star product, i.e. the Gutt star product on T\(^*\)G, is shown to preserve this function space, yielding the continuity of the Gutt star product with holomorphic dependence on \(\hbar\).
This paper studies differential graded modules and representations up to homotopy of Lie n-algebroids, for general \(n\in {\mathbb {N}}\). The adjoint and coadjoint modules are described, and the corresponding split versions of the adjoint and coadjoint representations up to homotopy are explained. In particular, the case of Lie 2-algebroids is analysed in detail. The compatibility of a Poisson bracket with the homological vector field of a Lie n-algebroid is shown to be equivalent to a morphism from the coadjoint module to the adjoint module, leading to an alternative characterisation of non-degeneracy of higher Poisson structures. Moreover, the Weil algebra of a Lie n-algebroid is computed explicitly in terms of splittings, and representations up to homotopy of Lie n-algebroids are used to encode decomposed VB-Lie n-algebroid structures on double vector bundles.
Providing adaptive, independence-preserving and theory-guided support to students in dealing with real-world problems in mathematics lessons is a major challenge for teachers in their professional practice. This paper examines this challenge in the context of simulations and mathematical modelling with digital tools: in addition to mathematical difficulties when autonomously working out individual solutions, students may also experience challenges when using digital tools. These challenges need to be closely examined and diagnosed, and might – if necessary – have to be overcome by intervention in such a way that the students can subsequently continue working independently. Thus, if a difficulty arises in the working process, two knowledge dimensions are necessary in order to provide adapted support to students. For teaching simulations and mathematical modelling with digital tools, more specifically, these knowledge dimensions are: pedagogical content knowledge about simulation and modelling processes supported by digital tools (this includes knowledge about phases and difficulties in the working process) and pedagogical content knowledge about interventions during the mentioned processes (focussing on characteristics of suitable interventions as well as their implementation and effects on the students’ working process). The two knowledge dimensions represent cognitive dispositions as the basis for the conceptualisation and operationalisation of a so-called adaptive intervention competence for teaching simulations and mathematical modelling with digital tools. In our article, we present a domain-specific process model and distinguish different types of teacher interventions. Then we describe the design and content of a university course at two German universities aiming to promote this domain-specific professional adaptive intervention competence, among others. 
In a study using a quasi-experimental pre-post design (N = 146), we confirm that the structure of cognitive dispositions of adaptive intervention competence for teaching simulations and mathematical modelling with digital tools can be described empirically by a two-dimensional model. In addition, the effectiveness of the course is examined and confirmed quantitatively. Finally, the results are discussed, especially against the background of the sample and the research design, and conclusions are derived for possibilities of promoting professional adaptive intervention competence in university courses.
This thesis is devoted, first, to the theoretical and numerical investigation of an augmented Lagrangian method for the solution of optimization problems with geometric constraints and, subsequently, of constrained structured optimization problems featuring a composite objective function and set-membership constraints. It is then concerned with the convergence and rate-of-convergence analysis of proximal gradient methods for composite optimization problems in the presence of the Kurdyka--{\L}ojasiewicz property, without a global Lipschitz assumption.
The goal of this thesis is to study the topological and algebraic properties of the quasiconformal automorphism groups of simply and multiply connected domains in the complex plane, in which the quasiconformal automorphism groups are endowed with the supremum metric on the underlying domain. More precisely, questions concerning central topological properties such as (local) compactness, (path-)connectedness and separability and their dependence on the boundary of the corresponding domains are studied, as well as completeness with respect to the supremum metric. Moreover, special subsets of the quasiconformal automorphism group of the unit disk are investigated, and concrete quasiconformal automorphisms are constructed. Finally, a possible application of quasiconformal unit disk automorphisms to symmetric cryptography is presented, in which a quasiconformal cryptosystem is defined and studied.
We analyze the mathematical models of two classes of physical phenomena. The first class of phenomena we consider is the interaction between one or more insulating rigid bodies and an electrically conducting fluid in which the bodies are contained, as well as the electromagnetic fields penetrating both materials. We take into account both the cases of incompressible and compressible fluids. In both cases our main result yields the existence of weak solutions to the associated system of partial differential equations. The proofs of these results are built upon hybrid discrete-continuous approximation schemes: Parts of the systems are discretized with respect to time in order to deal with the solution-dependent test functions in the induction equation. The remaining parts are treated as continuous equations on the small intervals between consecutive discrete time points, allowing us to employ techniques which do not transfer to the discretized setting. Moreover, the solution-dependent test functions in the momentum equation are handled via the use of classical penalization methods.
The second class of phenomena we consider is the evolution of a magnetoelastic material. Here too, our main result proves the existence of weak solutions to the corresponding system of partial differential equations. Its proof is based on De Giorgi's minimizing movements method, in which the system is discretized in time and, at each discrete time point, a minimization problem is solved, the associated Euler-Lagrange equations of which constitute a suitable approximation of the original equation of motion and magnetic force balance. The construction of such a minimization problem is made possible by the realization that, already on the continuous level, both of these equations can be written in terms of the same energy and dissipation potentials. The functional for the discrete minimization problem can then be constructed on the basis of these potentials.
In this thesis, a variety of Fokker--Planck (FP) optimal control problems are investigated. Main emphasis is put on a first-- and second--order analysis of different optimal control problems, characterizing optimal controls, establishing regularity results for optimal controls, and providing a numerical analysis for a Galerkin--based numerical scheme.
The Fokker--Planck equation is a partial differential equation (PDE) of linear parabolic type deeply connected to the theory of stochastic processes and stochastic differential equations. In essence, it describes the evolution over time of the probability distribution of the state of an object or system of objects under the influence of both deterministic and stochastic forces.
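As a reference point for the description above (standard textbook form, not quoted from the thesis): for an Itô process, the FP equation governs the probability density \(\rho\) of the state,

```latex
\mathrm{d}X_t = b(X_t, t)\,\mathrm{d}t + \sigma(X_t, t)\,\mathrm{d}W_t
\quad\Longrightarrow\quad
\partial_t \rho = -\nabla \cdot (b\,\rho)
  + \tfrac{1}{2} \sum_{i,j} \partial_{x_i} \partial_{x_j} \bigl( a_{ij}\,\rho \bigr),
\qquad a = \sigma \sigma^{\top},
```

where the drift \(b\) collects the deterministic forces and the diffusion \(a\) the stochastic ones.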
The FP equation is a cornerstone in understanding and modeling phenomena ranging from the diffusion and motion of molecules in a fluid to the fluctuations in financial markets.
Two different types of optimal control problems are analyzed in this thesis. On the one hand, Fokker--Planck ensemble optimal control problems are considered that have a wide range of applications in controlling a system of multiple non--interacting objects. In this framework, the goal is to collectively drive each object into a desired state.
On the other hand, tracking--type control problems are investigated, commonly used in parameter identification problems or stemming from the field of inverse problems.
In this framework, the aim is to determine certain parameters or functions of the FP equation, such that the resulting probability distribution function takes a desired form, possibly observed by measurements.
In both cases, we consider FP models where the control functions are part of the drift, arising only from the deterministic forces of the system. Therefore, the FP optimal control problem has a bilinear control structure.
Box constraints on the controls may be present, and the focus is on time--space dependent controls for ensemble--type problems and on only time--dependent controls for tracking--type optimal control problems.
In the first chapter of the thesis, a proof of the connection between the FP equation and stochastic differential equations is provided. Additionally, stochastic optimal control problems, aiming to minimize an expected cost value, are introduced, and the corresponding formulation within a deterministic FP control framework is established.
For the analysis of this PDE--constrained optimal control problem, the existence and regularity of solutions to the FP problem are investigated. New $L^\infty$--estimates for solutions are established for low space dimensions under mild assumptions on the drift. Furthermore, based on the theory of Bessel potential spaces, new smoothness properties are derived for solutions to the FP problem in the case of only time--dependent controls. Due to these properties, the control--to--state map, which associates the control functions with the corresponding solution of the FP problem, is well--defined, Fréchet differentiable and compact for suitable Lebesgue spaces or Sobolev spaces.
The existence of optimal controls is proven under various assumptions on the space of admissible controls and objective functionals. First--order optimality conditions are derived using the adjoint system. The resulting characterization of optimal controls is exploited to achieve higher regularity of optimal controls, as well as their state and co--state functions.
Since the FP optimal control problem is non--convex due to its bilinear structure, a first--order analysis should be complemented by a second--order analysis.
Therefore, a second--order analysis for the ensemble--type control problem in the case of $H^1$--controls in time and space is performed, and sufficient second--order conditions are provided. Analogous results are obtained for the tracking--type problem for only time--dependent controls.
The developed theory on the control problem and the first-- and second--order optimality conditions is applied to perform a numerical analysis for a Galerkin discretization of the FP optimal control problem. The main focus is on tracking-type problems with only time--dependent controls. The idea of the presented Galerkin scheme is to first approximate the PDE--constrained optimization problem by a system of ODE--constrained optimization problems. Then, conditions on the problem are presented such that the convergence of optimal controls from one problem to the other can be guaranteed.
For this purpose, a class of bilinear ODE--constrained optimal control problems arising from the Galerkin discretization of the FP problem is analyzed. First-- and second--order optimality conditions are established, and a numerical analysis is performed. A discretization with linear finite elements for the state and co--state problem is investigated, while the control functions are approximated by piecewise constant or piecewise quadratic continuous polynomials. The latter choice is motivated by the bilinear structure of the optimal control problem, allowing to overcome the discrepancies between a discretize--then--optimize and optimize--then--discretize approach. Moreover, second--order accuracy results are shown using the space of continuous, piecewise quadratic polynomials as the discrete space of controls. Lastly, the theoretical results and the second--order convergence rates are numerically verified.
The focus of this thesis is on analysing a linear stochastic partial differential equation (SPDE) on a bounded domain. The first part of the thesis commences with an examination of a one-dimensional SPDE. In this context, we construct estimators for the parameters of a parabolic SPDE based on discrete observations of a solution in time and space on a bounded domain. We establish central limit theorems for a high-frequency asymptotic regime, showing substantially smaller asymptotic variances compared to existing estimation methods. Moreover, asymptotic confidence intervals are directly feasible. Our approach builds upon realized volatilities and their asymptotic representation as the response of a log-linear model with a spatial explanatory variable. This yields efficient estimators based on realized volatilities with optimal rates of convergence and minimal variances. We demonstrate our results by Monte Carlo simulations.
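The realized-volatility idea can be sketched in its simplest form, with scaled Brownian motion at a single point serving as an illustrative stand-in for the thesis's SPDE observation scheme:

```python
import numpy as np

# simulate X_t = sigma * W_t observed on an equidistant time grid (a toy stand-in
# for the temporal dynamics of the SPDE solution at a fixed spatial point)
rng = np.random.default_rng(0)
sigma_true = 2.0
n, T = 20000, 1.0
dt = T / n
increments = sigma_true * np.sqrt(dt) * rng.standard_normal(n)

# realized volatility: the sum of squared increments estimates sigma^2 * T
rv = np.sum(increments**2)
sigma2_hat = rv / T
```

In the thesis, realized volatilities computed at several spatial points become the responses of a log-linear model in the spatial coordinate, from which the SPDE parameters are recovered; the toy above only shows the high-frequency consistency of a single realized volatility.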
Extending this framework, we analyse a second-order SPDE model in multiple space dimensions in the second part of this thesis and develop estimators for the parameters of this model based on discrete observations in time and space on a bounded domain. While parameter estimation for one and two spatial dimensions was established in recent literature, this is the first work that generalizes the theory to a general, multi-dimensional framework. Our methodology enables the construction of an oracle estimator for volatility within the underlying model. For proving central limit theorems, we use a high-frequency observation scheme. To showcase our results, we conduct a Monte Carlo simulation, highlighting the advantages of our novel approach in a multi-dimensional context.
Physical regimes characterized by low Mach numbers and steep stratifications pose severe challenges to standard finite volume methods. We present three new methods specifically designed to navigate these challenges by being both low Mach compliant and well-balanced. These properties are crucial for numerical methods to efficiently and accurately compute solutions in the regimes considered.
First, we concentrate on the construction of an approximate Riemann solver within Godunov-type finite volume methods. A new relaxation system gives rise to a two-speed relaxation solver for the Euler equations with gravity. Derived from fundamental mathematical principles, this solver reduces the artificial dissipation in the subsonic regime and preserves hydrostatic equilibria. The solver is particularly stable as it satisfies a discrete entropy inequality, preserves positivity of density and internal energy, and suppresses checkerboard modes.
The second scheme is designed to solve the equations of ideal MHD and combines different approaches. In order to deal with low Mach numbers, it makes use of a low-dissipation version of the HLLD solver and a partially implicit time discretization to relax the CFL time step constraint. A Deviation Well-Balancing method is employed to preserve a priori known magnetohydrostatic equilibria and thereby reduces the magnitude of spatial discretization errors in strongly stratified setups.
The third scheme relies on an IMEX approach based on a splitting of the MHD equations. The slow scale part of the system is discretized by a time-explicit Godunov-type method, whereas the fast scale part is discretized implicitly by central finite differences. Numerical dissipation terms and CFL time step restriction of the method depend solely on the slow waves of the explicit part, making the method particularly suited for subsonic regimes. Deviation Well-Balancing ensures the preservation of a priori known magnetohydrostatic equilibria.
The three schemes are applied to various numerical experiments for the compressible Euler and ideal MHD equations, demonstrating their ability to accurately simulate flows in regimes with low Mach numbers and strong stratification even on coarse grids.
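Why well-balancing matters can be seen in a one-dimensional toy computation (our illustration, assuming g = 1 and an isothermal profile, with the "deviation" idea reduced to its trivial equilibrium case):

```python
import numpy as np

# isothermal hydrostatic equilibrium with g = 1: p(x) = exp(-x), rho = p
h = 0.1
x = np.arange(0.0, 2.0, h)
p_eq = np.exp(-x)
rho = p_eq

# naive central discretization of the momentum balance dp/dx = -rho * g:
# the truncation error does not vanish at equilibrium, so a spurious flow develops
naive_residual = (p_eq[2:] - p_eq[:-2]) / (2.0 * h) + rho[1:-1]

# deviation well-balancing: evolve the perturbation dp = p - p_eq instead; at
# equilibrium dp vanishes identically, so the discrete residual is exactly zero
p_init = p_eq.copy()
dp = p_init - p_eq
deviation_residual = (dp[2:] - dp[:-2]) / (2.0 * h)
```

The naive residual is O(h²) but nonzero, which in a strongly stratified simulation would be integrated in time and masquerade as physical motion; the deviation formulation preserves the equilibrium to machine precision by construction.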