We generalize a theorem by Titchmarsh about the mean value of Hardy’s \(Z\)-function at the Gram points to the Hecke \(L\)-functions, which in turn implies the weak Gram law for them. Instead of proceeding analogously to Titchmarsh with an approximate functional equation we employ a different method using contour integration.
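Titchmarsh's mean-value result for the classical case can be explored numerically. The sketch below (assuming mpmath is available) evaluates Hardy's \(Z\)-function at the Gram points and averages \((-1)^n Z(g_n)\), whose mean classically tends to 2; it illustrates only the Riemann-zeta case, not the Hecke \(L\)-function generalization of the paper.

```python
# Numerically explore Titchmarsh's mean-value result for Hardy's Z-function
# at the Gram points g_n: the average of (-1)^n Z(g_n) classically tends
# to 2 as N grows. Riemann-zeta case only, via mpmath's built-ins.
import mpmath

N = 60
total = mpmath.mpf(0)
for n in range(1, N + 1):
    g_n = mpmath.grampoint(n)                 # n-th Gram point
    total += (-1) ** n * mpmath.siegelz(g_n)  # Hardy's Z at g_n

avg = total / N
print(f"average of (-1)^n Z(g_n) for n <= {N}: {float(avg):.3f}")
```

For moderate \(N\) the average is already visibly positive, which is the content of the weak Gram law.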
For a graph \(\Gamma\), let K be the smallest field containing all eigenvalues of the adjacency matrix of \(\Gamma\). The algebraic degree \(\deg (\Gamma )\) is the extension degree \([K:\mathbb {Q}]\). In this paper, we completely determine the algebraic degrees of Cayley graphs over abelian groups and dihedral groups.
The concept of derivative is characterised with reference to four basic mental models. These are described as theoretical constructs based on theoretical considerations. The four basic mental models—local rate of change, tangent slope, local linearity and amplification factor—are not only quantified empirically but are also validated. To this end, a test instrument for measuring students’ characteristics of basic mental models is presented and analysed regarding quality criteria.
Mathematics students (n = 266) were tested with this instrument. The test results show that the four basic mental models of the derivative can be reconstructed among the students with different characteristics. The tangent slope has the highest agreement values across all tasks. The agreement on explanations based on the basic mental model of rate of change is not as strongly established among students as one would expect due to framework settings in the school system by means of curricula and educational standards. The basic mental model of local linearity plays a rather subordinate role. The amplification factor achieves the lowest agreement values. In addition, cluster analysis was conducted to identify different subgroups of the student population. Moreover, the test results can be attributed to characteristics of the task types as well as to the students’ previous experiences from mathematics classes by means of qualitative interpretation. These and other results of students’ basic mental models of the derivative are presented and discussed in detail.
Mathematical concepts are regularly used in media reports concerning the Covid-19 pandemic. These include growth models, which attempt to explain or predict the effectiveness of interventions and developments, as well as the reproductive factor. Our contribution has the aim of showing that basic mental models about exponential growth are important for understanding media reports of Covid-19. Furthermore, we highlight how the coronavirus pandemic can be used as a context in mathematics classrooms to help students understand that they can and should question media reports on their own, using their mathematical knowledge. Therefore, we first present the role of mathematical modelling in achieving these goals in general. The same relevance applies to the necessary basic mental models of exponential growth. Following this description, based on three topics, namely, investigating the type of growth, questioning given course models, and determining exponential factors at different times, we show how the presented theoretical aspects manifest themselves in teaching examples when students are given the task of reflecting critically on existing media reports. Finally, the value of the three topics regarding the intended goals is discussed and conclusions concerning the possibilities and limits of their use in schools are drawn.
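The third topic, determining exponential factors at different times, amounts to computing day-to-day ratios of reported counts and checking whether they stay roughly constant. A minimal sketch (the counts below are invented for illustration, not real case data):

```python
# Estimate day-to-day growth factors from a count series and check whether
# growth is approximately exponential, i.e. whether the factors are roughly
# constant. The counts below are invented purely for illustration.
counts = [100, 130, 169, 220, 286, 372]

factors = [counts[i + 1] / counts[i] for i in range(len(counts) - 1)]
mean_factor = sum(factors) / len(factors)

# For exponential growth the ratios barely vary around their mean.
spread = max(factors) - min(factors)
print("daily growth factors:", [round(f, 3) for f in factors])
print(f"mean factor: {mean_factor:.3f}, spread: {spread:.3f}")
```

Students can apply the same computation to a media report's published numbers and question whether the claimed "exponential growth" is actually supported by near-constant factors.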
We extend Bourgain’s bound for the order of growth of the Riemann zeta function on the critical line to Lerch zeta functions. More precisely, we prove \(L(\lambda, \alpha, 1/2 + it) \ll t^{13/84+\epsilon}\) as \(t \to \infty\). For both the Riemann zeta function and the more general Lerch zeta function, it is conjectured that the right-hand side can be replaced by \(t^{\epsilon}\) (the so-called Lindelöf hypothesis). The growth of an analytic function is closely related to the distribution of its zeros.
For a connected real Lie group G we consider the canonical standard-ordered star product arising from the canonical global symbol calculus based on the half-commutator connection of G. This star product trivially converges on polynomial functions on T\(^*\)G thanks to its homogeneity. We define a nuclear Fréchet algebra of certain analytic functions on T\(^*\)G, for which the standard-ordered star product is shown to be a well-defined continuous multiplication, depending holomorphically on the deformation parameter \(\hbar\). This nuclear Fréchet algebra is realized as the completed (projective) tensor product of a nuclear Fréchet algebra of entire functions on G with an appropriate nuclear Fréchet algebra of functions on \({\mathfrak {g}}^*\). The passage to the Weyl-ordered star product, i.e. the Gutt star product on T\(^*\)G, is shown to preserve this function space, yielding the continuity of the Gutt star product with holomorphic dependence on \(\hbar\).
This paper studies differential graded modules and representations up to homotopy of Lie n-algebroids, for general \(n\in {\mathbb {N}}\). The adjoint and coadjoint modules are described, and the corresponding split versions of the adjoint and coadjoint representations up to homotopy are explained. In particular, the case of Lie 2-algebroids is analysed in detail. The compatibility of a Poisson bracket with the homological vector field of a Lie n-algebroid is shown to be equivalent to a morphism from the coadjoint module to the adjoint module, leading to an alternative characterisation of non-degeneracy of higher Poisson structures. Moreover, the Weil algebra of a Lie n-algebroid is computed explicitly in terms of splittings, and representations up to homotopy of Lie n-algebroids are used to encode decomposed VB-Lie n-algebroid structures on double vector bundles.
Providing adaptive, independence-preserving and theory-guided support to students in dealing with real-world problems in mathematics lessons is a major challenge for teachers in their professional practice. This paper examines this challenge in the context of simulations and mathematical modelling with digital tools: in addition to mathematical difficulties when autonomously working out individual solutions, students may also experience challenges when using digital tools. These challenges need to be closely examined and diagnosed, and might – if necessary – have to be overcome by intervention in such a way that the students can subsequently continue working independently. Thus, if a difficulty arises in the working process, two knowledge dimensions are necessary in order to provide adapted support to students. For teaching simulations and mathematical modelling with digital tools, more specifically, these knowledge dimensions are: pedagogical content knowledge about simulation and modelling processes supported by digital tools (this includes knowledge about phases and difficulties in the working process) and pedagogical content knowledge about interventions during the mentioned processes (focussing on characteristics of suitable interventions as well as their implementation and effects on the students’ working process). The two knowledge dimensions represent cognitive dispositions as the basis for the conceptualisation and operationalisation of a so-called adaptive intervention competence for teaching simulations and mathematical modelling with digital tools. In our article, we present a domain-specific process model and distinguish different types of teacher interventions. Then we describe the design and content of a university course at two German universities aiming to promote this domain-specific professional adaptive intervention competence, among others. 
In a study using a quasi-experimental pre-post design (N = 146), we confirm that the structure of cognitive dispositions of adaptive intervention competence for teaching simulations and mathematical modelling with digital tools can be described empirically by a two-dimensional model. In addition, the effectiveness of the course is examined and confirmed quantitatively. Finally, the results are discussed, especially against the background of the sample and the research design, and conclusions are derived for possibilities of promoting professional adaptive intervention competence in university courses.
Ó. Blasco and S. Pott showed that the supremum of operator norms over L\(^{2}\) of all bicommutators (with the same symbol) of one-parameter Haar multipliers dominates the biparameter dyadic product BMO norm of the symbol itself. In the present work we extend this result to the Bloom setting, and to any exponent 1 < p < ∞. The main tool is a new characterization in terms of paraproducts and two-weight John–Nirenberg inequalities for dyadic product BMO in the Bloom setting. We also extend our results to the whole scale of indexed spaces between little bmo and product BMO in the general multiparameter setting, with the appropriate iterated commutator in each case.
In this work, we consider impulsive dynamical systems evolving on an infinite-dimensional space and subjected to external perturbations. We look for stability conditions that guarantee the input-to-state stability of such systems. Our new dwell-time conditions allow for the situation where both the continuous and the discrete dynamics are unstable simultaneously. Lyapunov-like methods are developed for this purpose. Illustrative finite- and infinite-dimensional examples are provided to demonstrate the application of the main results. These examples cannot be treated by any previously published approach and demonstrate the effectiveness of our results.
In this paper we study properties of the Laplace approximation of the posterior distribution arising in nonlinear Bayesian inverse problems. Our work is motivated by Schillings et al. (Numer Math 145:915–971, 2020. https://doi.org/10.1007/s00211-020-01131-1), where it is shown that in such a setting the Laplace approximation error in Hellinger distance converges to zero in the order of the noise level. Here, we prove novel error estimates for a given noise level that also quantify the effect due to the nonlinearity of the forward mapping and the dimension of the problem. In particular, we are interested in settings in which a linear forward mapping is perturbed by a small nonlinear mapping. Our results indicate that in this case, the Laplace approximation error is of the size of the perturbation. The paper provides insight into Bayesian inference in nonlinear inverse problems, where linearization of the forward mapping has suitable approximation properties.
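A one-dimensional toy version of this setting makes the construction concrete: a linear forward map perturbed by a small cubic term (all numbers below are illustrative choices, not from the paper), where the Laplace approximation is the Gaussian fitted at the MAP with the curvature of the negative log-posterior.

```python
# Laplace approximation of a 1D posterior for a nearly linear forward map
# G(u) = u + eps*u^3 (toy setup). The posterior is proportional to
# exp(-Phi(u)), with Phi combining data misfit and a standard Gaussian
# prior; the Laplace approximation is N(u_map, 1/Phi''(u_map)).
import numpy as np

eps, sigma, y = 0.05, 0.1, 0.3        # nonlinearity size, noise level, data

def G(u):
    return u + eps * u**3

def Phi(u):                           # negative log-posterior
    return (y - G(u))**2 / (2 * sigma**2) + u**2 / 2

u = np.linspace(-2, 2, 4001)
du = u[1] - u[0]
post = np.exp(-Phi(u))
post /= post.sum() * du               # normalize on the grid

u_map = u[np.argmin(Phi(u))]          # grid MAP estimate
h = 1e-4
curv = (Phi(u_map + h) - 2 * Phi(u_map) + Phi(u_map - h)) / h**2
lap = np.exp(-0.5 * curv * (u - u_map)**2)
lap /= lap.sum() * du

hellinger = np.sqrt(0.5 * ((np.sqrt(post) - np.sqrt(lap))**2).sum() * du)
print(f"MAP: {u_map:.3f}, Hellinger distance: {hellinger:.4f}")
```

For small `eps` the Hellinger distance is small, mirroring the paper's message that the approximation error scales with the size of the nonlinear perturbation.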
In financial mathematics, it is a typical approach to approximate financial markets operating in discrete time by continuous-time models such as the Black–Scholes model. Fitting this model gives rise to difficulties due to the discrete nature of market data. We thus model the pricing process of financial derivatives by the Black–Scholes equation, where the volatility is a function of a finite number of random variables. This reflects the influence of uncertain factors when determining volatility. The aim is to quantify the effect of this uncertainty when computing the price of derivatives. Our underlying method is the generalized polynomial chaos (gPC) method, which numerically computes the uncertainty of the solution via the stochastic Galerkin approach and a finite difference method. We present an efficient numerical variation of this method, which is based on a machine learning technique, the so-called Bi-Fidelity approach. This is illustrated with numerical examples.
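The effect of uncertain volatility on a derivative price can be sketched with the simplest non-intrusive analogue of the approach: quadrature in the random parameter applied to the closed-form call price (the paper itself uses an intrusive stochastic Galerkin discretization with a Bi-Fidelity speed-up; all parameter values below are illustrative).

```python
# Price uncertainty for a European call when the Black-Scholes volatility
# is uncertain: sigma = sigma0 + delta*xi with xi ~ Uniform(-1, 1).
# Non-intrusive sketch: Gauss-Legendre quadrature in xi gives the mean
# and variance of the price.
import numpy as np
from scipy.stats import norm

S, K, r, T = 100.0, 100.0, 0.01, 1.0   # spot, strike, rate, maturity
sigma0, delta = 0.2, 0.05              # nominal volatility and spread

def bs_call(sigma):
    d1 = (np.log(S / K) + (r + sigma**2 / 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

nodes, weights = np.polynomial.legendre.leggauss(8)  # quadrature on [-1, 1]
prices = bs_call(sigma0 + delta * nodes)
mean = 0.5 * np.sum(weights * prices)                # U(-1,1) has density 1/2
var = 0.5 * np.sum(weights * prices**2) - mean**2
print(f"mean price: {mean:.3f}, std: {np.sqrt(var):.3f}")
```

The resulting standard deviation quantifies how much the price spread inherits from the volatility uncertainty.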
We construct a foliation of an asymptotically flat end of a Riemannian manifold by hypersurfaces which are critical points of a natural functional arising in potential theory. These hypersurfaces are perturbations of large coordinate spheres, and they admit solutions of a certain over-determined boundary value problem involving the Laplace–Beltrami operator. In a key step we must invert the Dirichlet-to-Neumann operator, highlighting the nonlocal nature of our problem.
Bivariate copula monitoring
(2022)
The assumption of multivariate normality underlying the Hotelling T\(^{2}\) chart is often violated for process data. The multivariate dependency structure can be separated from the marginals with the help of copula theory, which permits modelling association structures beyond the covariance matrix. Copula-based estimation and testing routines have reached maturity regarding a variety of practical applications. We have constructed a rich design matrix for the comparison of the Hotelling T\(^{2}\) chart with the copula test by Verdier and the copula test by Vuong, which allows for weighting the observations adaptively. Based on the design matrix, we have conducted a large and computationally intensive simulation study. The results show that the copula test by Verdier performs better than Hotelling T\(^{2}\) in a large variety of out-of-control cases, whereas the weighted Vuong scheme often fails to provide an improvement.
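For reference, the classical Hotelling T\(^{2}\) benchmark that the copula tests are compared against can be sketched in a few lines; this reproduces only the baseline chart on simulated data, not the Verdier or Vuong copula tests, and all distributional parameters are illustrative.

```python
# Minimal Hotelling T^2 monitoring sketch: Phase I data estimate the mean
# and covariance; Phase II observations raise an alarm when T^2 exceeds
# an asymptotic chi-square control limit.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
cov = [[1, 0.5], [0.5, 1]]
phase1 = rng.multivariate_normal([0, 0], cov, size=500)
mu = phase1.mean(axis=0)
S_inv = np.linalg.inv(np.cov(phase1, rowvar=False))
limit = chi2.ppf(0.995, df=2)           # 0.5% false-alarm control limit

def t2(x):
    d = x - mu
    return d @ S_inv @ d

in_control = rng.multivariate_normal([0, 0], cov, size=200)
shifted = rng.multivariate_normal([3, 3], cov, size=200)  # mean shift

alarms_ic = sum(t2(x) > limit for x in in_control)
alarms_shift = sum(t2(x) > limit for x in shifted)
print(f"alarms in control: {alarms_ic}/200, after mean shift: {alarms_shift}/200")
```

Under Gaussian data the chart works well, which is exactly why the interesting comparisons in the paper concern non-Gaussian dependence structures.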
A sequential quadratic Hamiltonian scheme for solving open-loop differential Nash games is proposed and investigated. This method is formulated in the framework of the Pontryagin maximum principle and represents an efficient and robust extension of the successive approximations strategy for solving optimal control problems. Theoretical results are presented that prove the well-posedness of the proposed scheme, and results of numerical experiments are reported that successfully validate its computational performance.
We prove a sharp Bernstein-type inequality for complex polynomials which are positive and satisfy a polynomial growth condition on the positive real axis. This leads to an improved upper estimate in the recent work of Culiuc and Treil (Int. Math. Res. Not. 2019: 3301–3312, 2019) on the weighted martingale Carleson embedding theorem with matrix weights. In the scalar case this new upper bound is optimal.
Nowadays, science, technology, engineering, and mathematics (STEM) play a critical role in a nation’s global competitiveness and prosperity. Thus, there is a need to educate students in these subjects to meet the current and future demands of personal life and society. While applications in science, engineering, and technology are directly obvious, mathematics underpins the other STEM disciplines. Although mathematics is recognized as the foundation for all other STEM disciplines, its role in classrooms is not yet clear. Therefore, the question arises: What is the current role of mathematics in secondary STEM classrooms? To answer this question, we conducted a systematic literature review based on three publication databases (Web of Science, ERIC, and EBSCO Teacher Referral Center). This literature review paper is intended to contribute to the current state of the role of mathematics in STEM education in secondary classrooms. The search initially yielded 1910 documents, of which only 14 proved eligible. In these, mathematics is often seen as a minor matter and a means to an end in the eyes of science educators. From this, we conclude that the role of mathematics in the STEM classroom should be further strengthened. Overall, the paper highlights a major research gap and proposes possible initial solutions to close it.
We give a collection of 16 examples which show that compositions \(g \circ f\) of well-behaved functions \(f\) and \(g\) can be badly behaved. Remarkably, in 10 of the 16 examples it suffices to take as outer function \(g\) simply a power-type or characteristic function. Such a collection of examples may serve as a source of exercises for a calculus course.
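An example in the same spirit (not claimed to be one of the paper's 16) with a power-type outer function: \(f(x) = x^2\) and \(g(x) = \sqrt{x}\) are both well-behaved on their domains, yet \(g \circ f = |\cdot|\) fails to be differentiable at 0, as the one-sided difference quotients show.

```python
# f(x) = x^2 is smooth and g(x) = sqrt(x) is well-behaved on [0, inf),
# yet (g o f)(x) = |x| is not differentiable at 0: the one-sided
# difference quotients at 0 converge to +1 and -1.
import math

f = lambda x: x * x
g = lambda x: math.sqrt(x)
comp = lambda x: g(f(x))                 # equals |x|

h = 1e-8
right = (comp(0 + h) - comp(0)) / h      # right-hand slope at 0
left = (comp(0) - comp(0 - h)) / h       # left-hand slope at 0
print(f"right slope: {right:.6f}, left slope: {left:.6f}")
```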
Composite optimization problems, where the sum of a smooth and a merely lower semicontinuous function has to be minimized, are often tackled numerically by means of proximal gradient methods as soon as the lower semicontinuous part of the objective function is of simple enough structure. The available convergence theory associated with these methods (mostly) requires the derivative of the smooth part of the objective function to be (globally) Lipschitz continuous, and this might be a restrictive assumption in some practically relevant scenarios. In this paper, we readdress this classical topic and provide convergence results for the classical (monotone) proximal gradient method and one of its nonmonotone extensions which are applicable in the absence of (strong) Lipschitz assumptions. This is possible since, for the price of forgoing convergence rates, we omit the use of descent-type lemmas in our analysis.
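The classical monotone proximal gradient iteration the paper starts from can be sketched on a textbook example where the Lipschitz assumption does hold (a least-squares term plus an \(\ell_1\) penalty); the paper's contribution is precisely the convergence theory when such a global Lipschitz constant is unavailable. Problem data below are synthetic.

```python
# Classical (monotone) proximal gradient method for the composite problem
#   min 0.5*||Ax - b||^2 + lam*||x||_1,
# where the smooth part has Lipschitz gradient with constant ||A||_2^2
# and the nonsmooth part has a cheap prox (soft-thresholding).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -3.0]
b = A @ x_true
lam = 0.5

L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
step = 1.0 / L

def prox_l1(v, t):                       # prox of t*||.||_1: soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(10)
for _ in range(500):
    grad = A.T @ (A @ x - b)
    x = prox_l1(x - step * grad, step * lam)

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print("support of solution:", np.nonzero(np.abs(x) > 1e-3)[0])
```

The recovered support matches the sparsity pattern of `x_true`, the standard behavior that the paper's Lipschitz-free analysis extends to harder settings.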
Let \((\phi_t)_{t \ge 0}\) be a semigroup of holomorphic functions in the unit disk \(\mathbb {D}\) and K a compact subset of \(\mathbb {D}\). We investigate the conditions under which the backward orbit of K under the semigroup exists. Subsequently, the geometric characteristics, as well as potential-theoretic quantities, for the backward orbit of K are examined. More specifically, results are obtained concerning the asymptotic behavior of its hyperbolic area and diameter, the harmonic measure, and the capacity of the condenser that K forms with the unit disk.
Coisotropic algebras consist of triples of algebras for which a reduction can be defined and unify in a very algebraic fashion coisotropic reduction in several settings. In this paper, we study the theory of (formal) deformation of coisotropic algebras showing that deformations are governed by suitable coisotropic DGLAs. We define a deformation functor and prove that it commutes with reduction. Finally, we study the obstructions to existence and uniqueness of coisotropic algebras and present some geometric examples.
The article deals with the pedagogical content knowledge of mathematical modelling as part of the professional competence of pre-service teachers. With the help of a test developed for this purpose from a conceptual model, we examine whether this pedagogical content knowledge can be promoted in its different facets—especially knowledge about modelling tasks and about interventions—by suitable university seminars. For this purpose, the test was administered to three groups in a seminar for the teaching of mathematical modelling: (1) to those respondents who created their own modelling tasks for use with students, (2) to those trained to intervene in mathematical modelling processes, and (3) participating students who are not required to address mathematical modelling. The findings of the study—based on variance analysis—indicate that certain facets (knowledge of modelling tasks, modelling processes, and interventions) have increased significantly in both experimental groups but to varying degrees. By contrast, pre-service teachers in the control group demonstrated no significant change to their level of pedagogical content knowledge.
A reformulation of cardinality-constrained optimization problems into continuous nonlinear optimization problems with an orthogonality-type constraint has gained some popularity during the last few years. Due to the special structure of the constraints, the reformulation violates many standard assumptions and therefore is often solved using specialized algorithms. In contrast to this, we investigate the viability of using a standard safeguarded multiplier penalty method without any problem-tailored modifications to solve the reformulated problem. We prove global convergence towards an (essentially strongly) stationary point under a suitable problem-tailored quasinormality constraint qualification. Numerical experiments illustrating the performance of the method in comparison to regularization-based approaches are provided.
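The bare mechanism of a safeguarded multiplier penalty (augmented Lagrangian) method can be shown on a toy equality-constrained problem; this sketch illustrates only the generic iteration, not the cardinality reformulation or the quasinormality theory of the paper, and the subproblem is solved in closed form for transparency.

```python
# Safeguarded multiplier (augmented Lagrangian) penalty iteration on the
# toy problem  min x1^2 + x2^2  s.t.  x1 + x2 = 1,  whose solution is
# (0.5, 0.5) with multiplier -1. The multiplier update is clipped to a
# safeguarding interval and the penalty parameter is increased each round.
import numpy as np

def solve_subproblem(lmbda, rho):
    # minimize x1^2 + x2^2 + lmbda*(x1+x2-1) + (rho/2)*(x1+x2-1)^2;
    # by symmetry x1 = x2 = t, and setting the derivative to zero gives
    # 4t + 2*lmbda + 2*rho*(2t - 1) = 0.
    t = (rho - lmbda) / (2 + 2 * rho)
    return np.array([t, t])

lmbda, rho = 0.0, 1.0
lmbda_max = 1e6                          # safeguard on the multiplier
for _ in range(30):
    x = solve_subproblem(lmbda, rho)
    c = x.sum() - 1.0                    # constraint violation
    lmbda = float(np.clip(lmbda + rho * c, -lmbda_max, lmbda_max))
    rho *= 2.0                           # tighten the penalty

print("x =", np.round(x, 4), " multiplier =", round(lmbda, 4))
```

The iterates converge to the KKT pair of the toy problem; the paper's point is that the very same unmodified scheme, applied to the reformulated cardinality-constrained problem, reaches suitably stationary points.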
Chemotaxis describes the movement of an organism, such as single or multi-cellular organisms and bacteria, in response to a chemical stimulus. Two widely used models to describe the phenomenon are the celebrated Keller–Segel equation and a chemotaxis kinetic equation. These two equations describe the organism's movement at the macro- and mesoscopic level, respectively, and are asymptotically equivalent in the parabolic regime. The way in which the organism responds to a chemical stimulus is embedded in the diffusion/advection coefficients of the Keller–Segel equation or the turning kernel of the chemotaxis kinetic equation. Experiments are conducted to measure the time dynamics of the organisms' population-level movement in response to a given stimulus. From this, one infers the chemotaxis response, which constitutes an inverse problem. In this paper, we discuss the relation between the macro- and mesoscopic inverse problems, which are associated with the two different forward models. The discussion is presented in the Bayesian framework, where the posterior distribution of the turning kernel of the organism population is sought. We prove the asymptotic equivalence of the two posterior distributions.
In this paper, we prove an asymptotic formula for the sum of the values of the periodic zeta-function at the nontrivial zeros of the Riemann zeta-function (up to some height) which are symmetrical on the real line and the critical line. This is an extension of the previous results due to Garunkštis, Kalpokas, and, more recently, Sowa. Whereas Sowa's approach was assuming the yet unproved Riemann hypothesis, our result holds unconditionally.
A basic mental model (BMM—in German ‘Grundvorstellung’) of a mathematical concept is a content-related interpretation that gives meaning to this concept. This paper defines normative and individual BMMs and concretizes them using the integral as an example. Four BMMs are developed about the concept of definite integral, sometimes used in specific teaching approaches: the BMMs of area, reconstruction, average, and accumulation. Based on theoretical work, in this paper we ask how these BMMs could be identified empirically. A test instrument was developed, piloted, validated and applied with 428 students in first-year mathematics courses. The test results show that the four normative BMMs of the integral can be detected and separated empirically. Moreover, the results allow a comparison of the existing individual BMMs and the requested normative BMMs. Consequences for future developments are discussed.
We derive a multi-species BGK model with velocity-dependent collision frequency for a non-reactive, multi-component gas mixture. The model is derived by minimizing a weighted entropy under the constraint that the number of particles of each species, total momentum, and total energy are conserved. We prove that this minimization problem admits a unique solution for very general collision frequencies. Moreover, we prove that the model satisfies an H-Theorem and characterize the form of equilibrium.
Sequential optimality conditions for cardinality-constrained optimization problems with applications
(2021)
Recently, a new approach to tackle cardinality-constrained optimization problems based on a continuous reformulation of the problem was proposed. Following this approach, we derive a problem-tailored sequential optimality condition, which is satisfied at every local minimizer without requiring any constraint qualification. We relate this condition to an existing M-type stationary concept by introducing a weak sequential constraint qualification based on a cone-continuity property. Finally, we present two algorithmic applications: We improve existing results for a known regularization method by proving that it generates limit points satisfying the aforementioned optimality conditions even if the subproblems are only solved inexactly. And we show that, under a suitable Kurdyka–Łojasiewicz-type assumption, any limit point of a standard (safeguarded) multiplier penalty method applied directly to the reformulated problem also satisfies the optimality condition. These results are stronger than corresponding ones known for the related class of mathematical programs with complementarity constraints.
This paper is devoted to the numerical analysis of non-smooth ensemble optimal control problems governed by the Liouville (continuity) equation that have been originally proposed by R.W. Brockett with the purpose of determining an efficient and robust control strategy for dynamical systems. A numerical methodology for solving these problems is presented that is based on a non-smooth Lagrange optimization framework where the optimal controls are characterized as solutions to the related optimality systems. For this purpose, approximation and solution schemes are developed and analysed. Specifically, for the approximation of the Liouville model and its optimization adjoint, a combination of a Kurganov–Tadmor method, a Runge–Kutta scheme, and a Strang splitting method are discussed. The resulting optimality system is solved by a projected semi-smooth Krylov–Newton method. Results of numerical experiments are presented that successfully validate the proposed framework.
Risk measures are commonly used to prepare for a prospective occurrence of an adverse event. If we are concerned with discrete risk phenomena such as counts of natural disasters, counts of infections by a serious disease, or counts of certain economic events, then the required risk forecasts are to be computed for an underlying count process. In practice, however, the discrete nature of count data is sometimes ignored and risk forecasts are calculated based on Gaussian time series models. But even if methods from count time series analysis are used in an adequate manner, the performance of risk forecasting is affected by estimation uncertainty as well as certain discreteness phenomena. To get a thorough overview of the aforementioned issues in risk forecasting of count processes, a comprehensive simulation study was done considering a broad variety of risk measures and count time series models. It becomes clear that Gaussian approximate risk forecasts substantially distort risk assessment and, thus, should be avoided. In order to account for the apparent estimation uncertainty in risk forecasting, we use bootstrap approaches for count time series. The relevance and the application of the proposed approaches are illustrated by real data examples about counts of storm surges and counts of financial transactions.
We consider the Bhatnagar–Gross–Krook (BGK) model, an approximation of the Boltzmann equation, describing the time evolution of a single monoatomic rarefied gas and satisfying the same two main properties (conservation properties and entropy inequality). However, in practical applications, one often has to deal with two additional physical issues. First, a gas often does not consist of only one species, but it consists of a mixture of different species. Second, the particles can store energy not only in translational degrees of freedom but also in internal degrees of freedom such as rotations or vibrations (polyatomic molecules). Therefore, here, we will present recent BGK models for gas mixtures for mono- and polyatomic particles and the existing mathematical theory for these models.
The bounded input bounded output (BIBO) stability for a nonlinear Caputo fractional system with time-varying bounded delay and nonlinear output is studied. Utilizing the Razumikhin method, Lyapunov functions, and appropriate fractional derivatives of Lyapunov functions, some new bounded input bounded output stability criteria are derived. Also, explicit bounds of the output that are independent of the initial time are provided. Uniform BIBO stability and uniform BIBO stability with input threshold are studied. A numerical simulation is carried out to show the system's dynamic response and demonstrate the effectiveness of our theoretical results.
One of the major motivations for the analysis and modeling of time series data is the forecasting of future outcomes. The use of interval forecasts instead of point forecasts allows us to incorporate the apparent forecast uncertainty. When forecasting count time series, one also has to account for the discreteness of the range, which is done by using coherent prediction intervals (PIs) relying on a count model. We provide a comprehensive performance analysis of coherent PIs for diverse types of count processes. We also compare them to approximate PIs that are computed based on a Gaussian approximation. Our analyses rely on an extensive simulation study. It turns out that the Gaussian approximations do considerably worse than the coherent PIs. Furthermore, special characteristics such as overdispersion, zero inflation, or trend clearly affect the PIs' performance. We conclude by presenting two empirical applications of PIs for count time series: the demand for blood bags in a hospital and the number of company liquidations in Germany.
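The core contrast between coherent and Gaussian-approximate PIs is easy to see on a single predictive distribution; the sketch below uses a plain Poisson predictive with an illustrative mean as a stand-in for the predictive distribution of a fitted count model.

```python
# Coherent vs Gaussian-approximate one-step prediction intervals for a
# count variable. The coherent PI is built from Poisson quantiles and is
# automatically integer-valued and nonnegative; the Gaussian approximation
# N(lam, lam) can produce negative, non-integer limits.
from scipy.stats import norm, poisson

lam = 2.3                                # illustrative predictive mean
alpha = 0.05

lo_c = int(poisson.ppf(alpha / 2, lam))      # coherent lower limit
hi_c = int(poisson.ppf(1 - alpha / 2, lam))  # coherent upper limit

z = norm.ppf(1 - alpha / 2)
lo_g = lam - z * lam**0.5                # Gaussian-approximate limits
hi_g = lam + z * lam**0.5

print(f"coherent PI: [{lo_c}, {hi_c}], Gaussian PI: [{lo_g:.2f}, {hi_g:.2f}]")
```

For small counts the Gaussian lower limit is negative, an incoherence that disappears once the discreteness of the range is respected.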
We investigate the convergence of the proximal gradient method applied to control problems with non-smooth and non-convex control cost. Here, we focus on control cost functionals that promote sparsity, which includes functionals of L\(^{p}\)-type for \(p \in [0,1)\). We prove stationarity properties of weak limit points of the method. These properties are weaker than those provided by Pontryagin’s maximum principle and weaker than L-stationarity.
In this paper we derive new results on multivariate extremes and D-norms. In particular we establish new characterizations of the multivariate max-domain of attraction property. The limit distribution of certain multivariate exceedances above high thresholds is derived, and the distribution of that generator of a D-norm on \({\mathbb {R}}^{d}\), whose components sum up to d, is obtained. Finally we introduce exchangeable D-norms and show that the set of exchangeable D-norms is a simplex.
We investigate eigenvalues of the zero-divisor graph Γ(R) of finite commutative rings R and study the interplay between these eigenvalues, the ring-theoretic properties of R and the graph-theoretic properties of Γ(R). The graph Γ(R) is defined as the graph with vertex set consisting of all nonzero zero-divisors of R and adjacent vertices x, y whenever xy=0. We provide formulas for the nullity of Γ(R), i.e., the multiplicity of the eigenvalue 0 of Γ(R). Moreover, we precisely determine the spectra of \(\Gamma ({\mathbb {Z}}_p \times {\mathbb {Z}}_p \times {\mathbb {Z}}_p)\) and \(\Gamma ({\mathbb {Z}}_p \times {\mathbb {Z}}_p \times {\mathbb {Z}}_p \times {\mathbb {Z}}_p)\) for a prime number p. We introduce a graph product ×Γ with the property that Γ(R)≅Γ(R\(_1\))×Γ⋯×ΓΓ(R\(_r\)) whenever R≅R\(_1\)×⋯×R\(_r\). With this product, we find relations between the number of vertices of the zero-divisor graph Γ(R), the compressed zero-divisor graph, the structure of the ring R and the eigenvalues of Γ(R).
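The definition of \(\Gamma(R)\) and its nullity can be made concrete in a few lines; the sketch below builds \(\Gamma({\mathbb {Z}}_8)\), whose nonzero zero-divisors are 2, 4, 6 forming a path, so its spectrum is \(\{-\sqrt{2}, 0, \sqrt{2}\}\) and the nullity is 1. (The rings treated in the paper, such as \({\mathbb {Z}}_p^3\), are handled the same way but with larger graphs.)

```python
# Build the zero-divisor graph Gamma(Z_8) and compute its spectrum and
# nullity. Vertices are the nonzero zero-divisors of Z_8; vertices u, v
# are adjacent whenever u*v = 0 mod 8.
import numpy as np

n = 8
verts = [x for x in range(1, n) if any((x * y) % n == 0 for y in range(1, n))]
A = np.array([[1 if u != v and (u * v) % n == 0 else 0 for v in verts]
              for u in verts])

eigs = np.sort(np.linalg.eigvalsh(A))
nullity = int(np.sum(np.abs(eigs) < 1e-9))   # multiplicity of eigenvalue 0
print("vertices:", verts)
print("spectrum:", np.round(eigs, 4), " nullity:", nullity)
```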
In this article we collect some recent results on the global existence of weak solutions for diffuse interface models involving incompressible magnetic fluids. We consider both the cases of matched and unmatched specific densities. For the model involving fluids with identical densities we consider the free energy density to be a double well potential whereas for the unmatched density case it is crucial to work with a singular free energy density.
Circadian endogenous clocks of eukaryotic organisms are an established and rapidly developing research field. To investigate and simulate, in an effective model, the impact of external stimuli on such clocks and their components, we developed a software framework for download and simulation. The application is useful for understanding the different effects involved in a mathematically simple and effective model. This concerns the effects of Zeitgebers, feedback loops, and further modifying components. We start from a known mathematical oscillator model, which is based on experimental molecular findings. This is extended with an effective framework that includes the impact of external stimuli on the circadian oscillations, including high-dose pharmacological treatment. In particular, the external stimuli framework defines a systematic procedure via input-output interfaces to couple different oscillators. The framework is validated by providing phase response curves and ranges of entrainment. Furthermore, Aschoff's rule is computationally investigated. It is shown how the external stimuli framework can be used to study biological effects like points of singularity or oscillators integrating different signals at once. The mathematical framework and formalism are generic and allow one to study, in general, the effect of external stimuli on oscillators and other biological processes. For easy replication of each numerical experiment presented in this work and easy implementation of the framework, the corresponding Mathematica files are fully made available. They can be downloaded at the following link: https://www.biozentrum.uni-wuerzburg.de/bioinfo/computing/circadian/.
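The general idea of forcing an oscillator with a Zeitgeber input can be sketched with a generic limit-cycle model; the van der Pol oscillator below is only an illustrative stand-in, not the molecular model of the paper, and all parameter values are arbitrary choices.

```python
# Generic forced limit-cycle oscillator as a stand-in for a circadian
# clock driven by a Zeitgeber: a van der Pol oscillator with a
# free-running period near 24.5 h receives a weak 24 h sinusoidal input.
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi / 24.5                 # intrinsic (free-running) frequency
zeitgeber = lambda t: 0.3 * np.sin(2 * np.pi * t / 24)  # 24 h light cycle

def rhs(t, s):
    x, v = s
    return [v, 0.5 * (1 - x**2) * v - omega**2 * x + zeitgeber(t)]

sol = solve_ivp(rhs, (0, 480), [1.0, 0.0], max_step=0.1)  # 20 simulated days
x = sol.y[0]

# crude check that a sustained oscillation remains after transients
tail = x[sol.t > 240]
print(f"min/max of x in steady state: {tail.min():.2f} / {tail.max():.2f}")
```

Comparing the steady-state phase of the forced and unforced runs is the simplest way to see entrainment, which the paper's framework systematizes via its input-output interfaces.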
Optimization problems with composite functions have an objective function that is the sum of a smooth and a (convex) nonsmooth term. This particular structure is exploited by the class of proximal gradient methods and some of their generalizations, such as proximal Newton and proximal quasi-Newton methods. The current literature on these classes of methods almost exclusively considers the case where the smooth term is also convex. Here we present a globalized proximal Newton-type method which allows the smooth term to be nonconvex. The method is shown to have favourable global and local convergence properties, and numerical results indicate that it is very promising from a practical point of view as well.
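The smooth-plus-nonsmooth split can be made concrete with the basic proximal gradient method (ISTA), the simplest member of the class this work generalizes. A minimal sketch (illustrative names, not the paper's proximal Newton method) for \(\min_x \tfrac12\|Ax-b\|^2+\lambda\|x\|_1\), where the proximal operator of the \(\ell_1\) term is soft-thresholding:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    # Minimise 0.5*||Ax - b||^2 + lam*||x||_1 via ISTA:
    # gradient step on the smooth term, prox step on the nonsmooth term.
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)         # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x

A = np.eye(2)
b = np.array([3.0, 0.1])
x = proximal_gradient(A, b, lam=1.0)
# With A = I the minimiser is soft-thresholding of b: x = (2.0, 0.0)
```

Proximal Newton-type methods replace the scaled gradient step by a step using (approximate) second-order information on the smooth term, which is where the globalization analysed in the paper becomes nontrivial.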
This paper is devoted to a theoretical and numerical investigation of Nash equilibria and Nash bargaining problems governed by bilinear (input-affine) differential models. Systems with this bilinear state-control structure arise in many applications in, e.g., biology, economics and physics, where competition between different species, agents, and forces needs to be modelled. For this purpose, the concept of Nash equilibria (NE) is appropriate, and the building blocks of the resulting differential Nash games are the control functions associated with the different players, who pursue different non-cooperative objectives. In this framework, existence of Nash equilibria is proved, and the equilibria are computed with a semi-smooth Newton scheme combined with a relaxation method. Further, a related Nash bargaining (NB) problem is discussed, which aims at determining an improvement of all players' objectives relative to the Nash equilibria. Results of numerical experiments successfully demonstrate the effectiveness of the proposed NE and NB computational framework.
In this paper we introduce a theoretical framework concerned with fostering functional thinking in Grade 8 students by utilizing digital technologies. This framework is meant to guide the systematic variation of tasks for implementation in the classroom while using digital technologies; examples of problems and tasks illustrate this process. Additionally, we present results of an empirical investigation with Grade 8 students, which focusses on the students' skills with digital technologies, how they utilize these tools when engaging with the developed tasks, and how the tools influence their functional thinking. The research aim is to investigate in what way tasks designed according to the theoretical framework can promote functional thinking while using digital technologies in the sense of the operative principle. The results show that the developed framework, the Function-Operation-Matrix, is a sound basis for initiating students' actions in the sense of the operative principle and for fostering the development of functional thinking in its three aspects (assignment, co-variation and object), and that digital technologies can support this process in a meaningful way.
A Lagrange multiplier method for semilinear elliptic state constrained optimal control problems
(2020)
In this paper we apply an augmented Lagrange method to a class of semilinear elliptic optimal control problems with pointwise state constraints. We show strong convergence of subsequences of the primal variables to a local solution of the original problem, as well as weak convergence of the adjoint states and weak-* convergence of the multipliers associated with the state constraint. Moreover, we show existence of stationary points in arbitrarily small neighborhoods of local solutions of the original problem. Additionally, various numerical results are presented.
Many modern statistically efficient methods come with tremendous computational challenges, often leading to large-scale optimisation problems. In this work, we examine such computational issues for recently developed estimation methods in nonparametric regression, with a specific view on image denoising. We consider in particular certain variational multiscale estimators which are statistically optimal in the minimax sense, yet computationally intensive. Such an estimator is computed as the minimiser of a smoothness functional (e.g., the TV norm) over the class of all estimators such that none of their coefficients with respect to a given multiscale dictionary is statistically significant. The resulting multiscale Nemirovski-Dantzig estimator (MIND) can incorporate any convex smoothness functional and combine it with a proper dictionary, including wavelets, curvelets and shearlets. Computing MIND in general requires solving a high-dimensional constrained convex optimisation problem with a specific structure of the constraints induced by the statistical multiscale testing criterion. To solve it explicitly, we discuss three different algorithmic approaches: the Chambolle-Pock, ADMM and semismooth Newton algorithms. Algorithmic details and an explicit implementation are presented, and the solutions are compared numerically in a simulation study and on various test images. We recommend the Chambolle-Pock algorithm in most cases for its fast convergence. We stress that our analysis can also be transferred to signal recovery and other denoising problems to recover more general objects whenever it is possible to borrow statistical strength from data patches of similar object structure.
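For intuition about the recommended algorithm, the following is a minimal self-contained sketch of the Chambolle-Pock primal-dual iteration on a toy 1D ROF model (quadratic fidelity plus TV penalty). It is not the MIND implementation of the paper, which handles the full multiscale constraint set; the structure of the iteration is the same, however:

```python
import numpy as np

def tv_denoise_cp(f, lam, iters=2000):
    # Chambolle-Pock iteration for the 1D ROF model:
    #   min_x 0.5*||x - f||^2 + lam*||D x||_1,  D = forward differences.
    n = len(f)
    x = f.copy()
    x_bar = f.copy()
    y = np.zeros(n - 1)                  # dual variable for D x
    tau = sigma = 0.25                   # tau*sigma*||D||^2 <= 1 (||D||^2 <= 4)
    for _ in range(iters):
        y = np.clip(y + sigma * np.diff(x_bar), -lam, lam)  # dual prox: projection
        div = np.zeros(n)                # -D^T y (discrete divergence)
        div[:-1] += y
        div[1:] -= y
        x_new = (x + tau * div + tau * f) / (1.0 + tau)     # primal prox (quadratic)
        x_bar = 2.0 * x_new - x          # extrapolation step
        x = x_new
    return x

# Signal with a single spike; for large enough lam, TV denoising
# flattens it to its mean value 0.2.
f = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
u = tv_denoise_cp(f, lam=0.6)
```

The dual step is a projection onto the constraint set (here a simple box); in MIND that projection is replaced by the projection induced by the statistical multiscale constraints, which is exactly where the specific constraint structure mentioned above enters.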
The characterization and numerical solution of two non-smooth optimal control problems governed by a Fokker–Planck (FP) equation are investigated in the framework of the Pontryagin maximum principle (PMP). The two FP control problems are related to the problem of determining open- and closed-loop controls for a stochastic process whose probability density function is modelled by the FP equation. In both cases, existence and PMP characterisation of optimal controls are proved, and PMP-based numerical optimization schemes are implemented that solve the PMP optimality conditions to determine the controls sought. Results of experiments are presented that successfully validate the proposed computational framework and allow a comparison of the two control strategies.
For an arbitrary complex number \(a\neq 0\) we consider the distribution of values of the Riemann zeta-function \(\zeta\) at the \(a\)-points of the function \(\Delta\) which appears in the functional equation \(\zeta(s)=\Delta(s)\zeta(1-s)\). These \(a\)-points \(\delta_a\) are clustered around the critical line \(\frac{1}{2}+i\mathbb{R}\), which happens to be a Julia line for the essential singularity of \(\zeta\) at infinity. We observe a remarkable average behaviour for the sequence of values \(\zeta(\delta_a)\).
We consider a class of “wild” initial data to the compressible Euler system that give rise to infinitely many admissible weak solutions via the method of convex integration. We identify the closure of this class in the natural \(L^1\)-topology and show that its complement is rather large; specifically, it is an open dense set.
We are interested in studying a system coupling the compressible Navier–Stokes equations with an elastic structure located at the boundary of the fluid domain. Initially the fluid domain is rectangular and the beam is located on the upper side of the rectangle. The elastic structure is modeled by an Euler–Bernoulli damped beam equation. We prove the local in time existence of strong solutions for that coupled system.
In this paper we consider the class (θA, B) of parameter-dependent linear systems given by matrices \(A\in\mathbb{C}^{n\times n}\) and \(B\in\mathbb{C}^{n\times m}\). This class is of interest in several applications, and a frequently met task for such systems is to steer the origin toward a given target family f(θ) using an input that is independent of the parameter. This paper provides a collection of necessary and sufficient conditions for ensemble reachability of these systems.
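As a hedged illustration (not one of the conditions derived in the paper): a natural necessary condition for ensemble reachability is that each frozen pair (θA, B) is itself controllable, which can be probed numerically with the classical Kalman rank test on a parameter sample:

```python
import numpy as np

def kalman_rank(A, B):
    # Rank of the controllability matrix [B, AB, ..., A^{n-1} B].
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# Sample check on the family (theta*A, B) over theta in [0.5, 1.5]:
# every frozen pair must be controllable (rank n) for ensemble
# reachability to be possible at all.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
ok = all(kalman_rank(t * A, B) == 2 for t in np.linspace(0.5, 1.5, 11))
```

Such a pointwise check is only necessary; the paper's conditions additionally account for the requirement that a single parameter-independent input work uniformly over the whole parameter family.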
In forecasting count processes, practitioners often ignore the discreteness of counts and compute forecasts based on Gaussian approximations instead. For both central and non-central point forecasts, and for various types of count processes, the performance of such approximate point forecasts is analyzed. The considered data-generating processes include different autoregressive schemes with varying model orders, count models with overdispersion or zero inflation, counts with a bounded range, and counts exhibiting trend or seasonality. We conclude that Gaussian forecast approximations should be avoided.
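A minimal illustration of the phenomenon, assuming a Poisson predictive distribution (an illustrative choice, not one of the paper's data-generating processes): for skewed low-count distributions, rounding the Gaussian mean need not reproduce the true discrete median forecast.

```python
import math

def poisson_cdf(k, lam):
    # P(X <= k) for X ~ Poisson(lam), summed directly.
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def poisson_median(lam):
    # Smallest k with CDF >= 0.5, i.e. the discrete median.
    k = 0
    while poisson_cdf(k, lam) < 0.5:
        k += 1
    return k

lam = 2.6                      # low-count predictive distribution
exact = poisson_median(lam)    # discrete median of Poisson(2.6): 2
approx = round(lam)            # Gaussian approximation (mean = median = lam): 3
# The Gaussian-based central point forecast overshoots the true median by one count.
```

For a symmetric, high-mean predictive distribution the two forecasts would typically agree, which is why the approximation error concentrates exactly in the low-count, skewed, or bounded settings studied above.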