Providing adaptive, independence-preserving and theory-guided support to students in dealing with real-world problems in mathematics lessons is a major challenge for teachers in their professional practice. This paper examines this challenge in the context of simulations and mathematical modelling with digital tools: in addition to mathematical difficulties when autonomously working out individual solutions, students may also experience challenges when using digital tools. These challenges need to be closely examined and diagnosed, and might – if necessary – have to be overcome by intervention in such a way that the students can subsequently continue working independently. Thus, if a difficulty arises in the working process, two knowledge dimensions are necessary in order to provide adapted support to students. For teaching simulations and mathematical modelling with digital tools, more specifically, these knowledge dimensions are: pedagogical content knowledge about simulation and modelling processes supported by digital tools (this includes knowledge about phases and difficulties in the working process) and pedagogical content knowledge about interventions during the mentioned processes (focussing on characteristics of suitable interventions as well as their implementation and effects on the students’ working process). The two knowledge dimensions represent cognitive dispositions as the basis for the conceptualisation and operationalisation of a so-called adaptive intervention competence for teaching simulations and mathematical modelling with digital tools. In our article, we present a domain-specific process model and distinguish different types of teacher interventions. We then describe the design and content of a university course at two German universities aiming to promote, among other competencies, this domain-specific professional adaptive intervention competence.
In a study using a quasi-experimental pre-post design (N = 146), we confirm that the structure of cognitive dispositions of adaptive intervention competence for teaching simulations and mathematical modelling with digital tools can be described empirically by a two-dimensional model. In addition, the effectiveness of the course is examined and confirmed quantitatively. Finally, the results are discussed, especially against the background of the sample and the research design, and conclusions are derived for possibilities of promoting professional adaptive intervention competence in university courses.
The goal of this thesis is to study the topological and algebraic properties of the quasiconformal automorphism groups of simply and multiply connected domains in the complex plane, in which the quasiconformal automorphism groups are endowed with the supremum metric on the underlying domain. More precisely, questions concerning central topological properties such as (local) compactness, (path-)connectedness and separability and their dependence on the boundary of the corresponding domains are studied, as well as completeness with respect to the supremum metric. Moreover, special subsets of the quasiconformal automorphism group of the unit disk are investigated, and concrete quasiconformal automorphisms are constructed. Finally, a possible application of quasiconformal unit disk automorphisms to symmetric cryptography is presented, in which a quasiconformal cryptosystem is defined and studied.
For a connected real Lie group G we consider the canonical standard-ordered star product arising from the canonical global symbol calculus based on the half-commutator connection of G. This star product trivially converges on polynomial functions on T\(^*\)G thanks to its homogeneity. We define a nuclear Fréchet algebra of certain analytic functions on T\(^*\)G, for which the standard-ordered star product is shown to be a well-defined continuous multiplication, depending holomorphically on the deformation parameter \(\hbar\). This nuclear Fréchet algebra is realized as the completed (projective) tensor product of a nuclear Fréchet algebra of entire functions on G with an appropriate nuclear Fréchet algebra of functions on \({\mathfrak {g}}^*\). The passage to the Weyl-ordered star product, i.e. the Gutt star product on T\(^*\)G, is shown to preserve this function space, yielding the continuity of the Gutt star product with holomorphic dependence on \(\hbar\).
Let \((\phi_t)_{t≥0}\) be a semigroup of holomorphic functions in the unit disk \(\mathbb {D}\) and K a compact subset of \(\mathbb {D}\). We investigate the conditions under which the backward orbit of K under the semigroup exists. Subsequently, the geometric characteristics, as well as potential-theoretic quantities, for the backward orbit of K are examined. More specifically, results are obtained concerning the asymptotic behavior of its hyperbolic area and diameter, the harmonic measure, and the capacity of the condenser that K forms with the unit disk.
This paper studies differential graded modules and representations up to homotopy of Lie n-algebroids, for general \(n\in {\mathbb {N}}\). The adjoint and coadjoint modules are described, and the corresponding split versions of the adjoint and coadjoint representations up to homotopy are explained. In particular, the case of Lie 2-algebroids is analysed in detail. The compatibility of a Poisson bracket with the homological vector field of a Lie n-algebroid is shown to be equivalent to a morphism from the coadjoint module to the adjoint module, leading to an alternative characterisation of non-degeneracy of higher Poisson structures. Moreover, the Weil algebra of a Lie n-algebroid is computed explicitly in terms of splittings, and representations up to homotopy of Lie n-algebroids are used to encode decomposed VB-Lie n-algebroid structures on double vector bundles.
We analyze the mathematical models of two classes of physical phenomena. The first class of phenomena we consider is the interaction between one or more insulating rigid bodies and an electrically conducting fluid, inside of which the bodies are contained, as well as the electromagnetic fields penetrating both materials. We take into account both the cases of incompressible and compressible fluids. In both cases, our main result yields the existence of weak solutions to the associated system of partial differential equations. The proofs of these results are built upon hybrid discrete-continuous approximation schemes: Parts of the systems are discretized with respect to time in order to deal with the solution-dependent test functions in the induction equation. The remaining parts are treated as continuous equations on the small intervals between consecutive discrete time points, allowing us to employ techniques which do not transfer to the discretized setting. Moreover, the solution-dependent test functions in the momentum equation are handled via the use of classical penalization methods.
The second class of phenomena we consider is the evolution of a magnetoelastic material. Here too, our main result proves the existence of weak solutions to the corresponding system of partial differential equations. Its proof is based on De Giorgi's minimizing movements method, in which the system is discretized in time and, at each discrete time point, a minimization problem is solved, the associated Euler-Lagrange equations of which constitute a suitable approximation of the original equation of motion and magnetic force balance. The construction of such a minimization problem is made possible by the realization that, already on the continuous level, both of these equations can be written in terms of the same energy and dissipation potentials. The functional for the discrete minimization problem can then be constructed on the basis of these potentials.
We extend Bourgain’s bound for the order of growth of the Riemann zeta function on the critical line to Lerch zeta functions. More precisely, we prove L(λ, α, 1/2 + it) ≪ t\(^{13/84+ϵ}\) as t → ∞. For both the Riemann zeta function and the more general Lerch zeta function, it is conjectured that the right-hand side can be replaced by t\(^ϵ\) (the so-called Lindelöf hypothesis). The growth of an analytic function is closely related to the distribution of its zeros.
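For reference, the Lerch zeta function in the standard notation, together with the bound proved above (this is the standard series definition for \(\mathrm{Re}(s) > 1\), extended by analytic continuation; the Riemann zeta function is the special case \(\lambda \in \mathbb{Z}\), \(\alpha = 1\)):

```latex
% Lerch zeta function and the Bourgain-type growth bound on the critical line
\[
  L(\lambda,\alpha,s) \;=\; \sum_{n=0}^{\infty} \frac{e^{2\pi i \lambda n}}{(n+\alpha)^{s}},
  \qquad
  L\bigl(\lambda,\alpha,\tfrac12+it\bigr) \;\ll\; t^{13/84+\epsilon}
  \quad (t\to\infty).
\]
```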
We give a collection of 16 examples which show that compositions \(g\) \(\circ\) \(f\) of well-behaved functions \(f\) and \(g\) can be badly behaved. Remarkably, in 10 of the 16 examples it suffices to take as outer function \(g\) simply a power-type or characteristic function. Such a collection of examples may serve as a source of exercises for a calculus course.
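A classical instance of the phenomenon, in the spirit of the power-type outer functions mentioned above (our illustration, not necessarily one of the paper's 16 examples):

```latex
% f is smooth on all of R, g is continuous on [0,infty) and smooth on (0,infty),
% yet the composition fails to be differentiable at 0:
\[
  f(x) = x^{2}, \qquad g(y) = \sqrt{y}, \qquad
  (g \circ f)(x) = \sqrt{x^{2}} = |x|.
\]
```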
For a graph \(\Gamma\) , let K be the smallest field containing all eigenvalues of the adjacency matrix of \(\Gamma\) . The algebraic degree \(\deg (\Gamma )\) is the extension degree \([K:\mathbb {Q}]\). In this paper, we completely determine the algebraic degrees of Cayley graphs over abelian groups and dihedral groups.
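An illustrative sketch (not taken from the paper): the algebraic degree of a small Cayley graph can be computed symbolically. The 5-cycle \(C_5 = \mathrm{Cay}(\mathbb{Z}/5\mathbb{Z}, \{\pm 1\})\) has eigenvalues \(2\cos(2\pi k/5)\), which generate \(\mathbb{Q}(\sqrt{5})\), so its algebraic degree is 2.

```python
# Hypothetical example: algebraic degree of the cycle C5, a Cayley graph
# over the abelian group Z/5Z with connection set {+1, -1}.
import math
import sympy as sp

n = 5
# Adjacency matrix of Cay(Z/5Z, {±1}): i ~ j iff i - j ≡ ±1 (mod 5).
A = sp.Matrix(n, n, lambda i, j: 1 if (i - j) % n in (1, n - 1) else 0)

eigs = list(A.eigenvals().keys())
# Degree of each eigenvalue over Q, via its minimal polynomial.
degs = [sp.minimal_polynomial(e, polys=True).degree() for e in eigs]

# For C5 all eigenvalues lie in the single quadratic field Q(sqrt(5)),
# so the lcm of the individual degrees equals the degree of K over Q.
# (In general the compositum may be larger than the lcm suggests.)
alg_deg = math.lcm(*degs)
print(alg_deg)  # algebraic degree of C5
```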
Mathematical concepts are regularly used in media reports concerning the Covid-19 pandemic. These include growth models, which attempt to explain or predict the effectiveness of interventions and developments, as well as the reproductive factor. Our contribution has the aim of showing that basic mental models about exponential growth are important for understanding media reports of Covid-19. Furthermore, we highlight how the coronavirus pandemic can be used as a context in mathematics classrooms to help students understand that they can and should question media reports on their own, using their mathematical knowledge. Therefore, we first present the role of mathematical modelling in achieving these goals in general. The same relevance applies to the necessary basic mental models of exponential growth. Following this description, based on three topics, namely, investigating the type of growth, questioning given course models, and determining exponential factors at different times, we show how the presented theoretical aspects manifest themselves in teaching examples when students are given the task of reflecting critically on existing media reports. Finally, the value of the three topics regarding the intended goals is discussed and conclusions concerning the possibilities and limits of their use in schools are drawn.
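The third classroom topic above, determining exponential factors at different times, can be sketched in a few lines (all numbers here are made up for illustration): if day-to-day growth factors are roughly constant, the data are consistent with exponential growth, and the factor yields a doubling time.

```python
# Hypothetical cumulative case counts on consecutive days (invented data).
import math

cases = [100, 150, 225, 338, 507]

# Day-to-day growth factors; a roughly constant factor suggests
# exponential growth cases[t] ≈ cases[0] * q**t.
factors = [cases[i + 1] / cases[i] for i in range(len(cases) - 1)]
print(factors)  # all close to 1.5

# With constant factor q per day, the doubling time is log(2)/log(q).
q = factors[0]
doubling_time = math.log(2) / math.log(q)
print(doubling_time)
```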
We generalize a theorem by Titchmarsh about the mean value of Hardy’s \(Z\)-function at the Gram points to the Hecke \(L\)-functions, which in turn implies the weak Gram law for them. Instead of proceeding analogously to Titchmarsh with an approximate functional equation we employ a different method using contour integration.
The concept of derivative is characterised with reference to four basic mental models. These are described as theoretical constructs on the basis of theoretical considerations. The four basic mental models—local rate of change, tangent slope, local linearity and amplification factor—are not only empirically quantified but also validated. To this end, a test instrument for measuring the characteristics of students’ basic mental models is presented and analysed regarding quality criteria.
Mathematics students (n = 266) were tested with this instrument. The test results show that the four basic mental models of the derivative can be reconstructed among the students with different characteristics. The tangent slope has the highest agreement values across all tasks. The agreement on explanations based on the basic mental model of rate of change is not as strongly established among students as one would expect due to framework settings in the school system by means of curricula and educational standards. The basic mental model of local linearity plays a rather subordinate role. The amplification factor achieves the lowest agreement values. In addition, cluster analysis was conducted to identify different subgroups of the student population. Moreover, the test results can be attributed to characteristics of the task types as well as to the students’ previous experiences from mathematics classes by means of qualitative interpretation. These and other results of students’ basic mental models of the derivative are presented and discussed in detail.
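The four basic mental models can be related compactly in standard notation (our illustration, not part of the test instrument):

```latex
% Local rate of change: the limit of difference quotients.
\[ f'(x_0) \;=\; \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h} \]
% Tangent slope: f'(x_0) is the slope of the tangent
%   y = f(x_0) + f'(x_0)(x - x_0).
% Local linearity: near x_0 the function is approximately affine,
\[ f(x_0 + h) \;=\; f(x_0) + f'(x_0)\,h + o(h) \quad (h \to 0). \]
% Amplification factor: small input changes are amplified by f'(x_0),
%   \Delta f \approx f'(x_0)\,\Delta x.
```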
Composite optimization problems, where the sum of a smooth and a merely lower semicontinuous function has to be minimized, are often tackled numerically by means of proximal gradient methods as soon as the lower semicontinuous part of the objective function is of simple enough structure. The available convergence theory associated with these methods (mostly) requires the derivative of the smooth part of the objective function to be (globally) Lipschitz continuous, and this might be a restrictive assumption in some practically relevant scenarios. In this paper, we readdress this classical topic and provide convergence results for the classical (monotone) proximal gradient method and one of its nonmonotone extensions which are applicable in the absence of (strong) Lipschitz assumptions. This is possible since, for the price of forgoing convergence rates, we omit the use of descent-type lemmas in our analysis.
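A minimal sketch of the classical (monotone) proximal gradient method the paper starts from, for the standard model problem \(\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1\), whose nonsmooth part has the soft-thresholding operator as its prox. Note that this sketch uses the classical constant step size \(1/L\) with \(L\) the global Lipschitz constant of the gradient, i.e. precisely the assumption the paper works to relax; all data below are invented for illustration.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1: componentwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Classical (monotone) proximal gradient iteration for
    0.5*||A x - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)              # gradient of the smooth part
        x = prox_l1(x - step * grad, step * lam)
    return x

# Invented problem data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])

# Classical step 1/L with L = ||A||_2^2, the global Lipschitz constant
# of the gradient of the smooth part.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
```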
Nowadays, science, technology, engineering, and mathematics (STEM) play a critical role in a nation’s global competitiveness and prosperity. Thus, there is a need to educate students in these subjects to meet the current and future demands of personal life and society. While applications, especially in science, engineering, and technology, are directly obvious, mathematics underpins the other STEM disciplines. Although mathematics is recognized as the foundation for all other STEM disciplines, its role in classrooms is not yet clear. Therefore, the question arises: What is the current role of mathematics in secondary STEM classrooms? To answer this question, we conducted a systematic literature review based on three publication databases (Web of Science, ERIC, and EBSCO Teacher Referral Center). This literature review paper is intended to contribute to the current state of the role of mathematics in STEM education in secondary classrooms. Of the 1910 documents initially retrieved, only 14 were eligible. In these, mathematics is often seen as a minor matter and a means to an end in the eyes of science educators. From this, we conclude that the role of mathematics in the STEM classroom should be further strengthened. Overall, the paper highlights a major research gap and proposes possible initial solutions to close it.
This thesis is devoted, first, to the theoretical and numerical investigation of an augmented Lagrangian method for the solution of optimization problems with geometric constraints, as well as of constrained structured optimization problems featuring a composite objective function and set-membership constraints. It is then concerned with the convergence and rate-of-convergence analysis of proximal gradient methods for composite optimization problems in the presence of the Kurdyka–Łojasiewicz property, without a global Lipschitz assumption.
Ó. Blasco and S. Pott showed that the supremum of operator norms over L\(^{2}\) of all bicommutators (with the same symbol) of one-parameter Haar multipliers dominates the biparameter dyadic product BMO norm of the symbol itself. In the present work we extend this result to the Bloom setting, and to any exponent 1 < p < ∞. The main tool is a new characterization in terms of paraproducts and two-weight John–Nirenberg inequalities for dyadic product BMO in the Bloom setting. We also extend our results to the whole scale of indexed spaces between little bmo and product BMO in the general multiparameter setting, with the appropriate iterated commutator in each case.
Bivariate copula monitoring (2022)
The assumption of multivariate normality underlying the Hotelling T\(^{2}\) chart is often violated for process data. The multivariate dependency structure can be separated from the marginals with the help of copula theory, which permits modeling association structures beyond the covariance matrix. Copula‐based estimation and testing routines have reached maturity regarding a variety of practical applications. We have constructed a rich design matrix for the comparison of the Hotelling T\(^{2}\) chart with the copula test by Verdier and the copula test by Vuong, which allows for weighting the observations adaptively. Based on the design matrix, we have conducted a large and computationally intensive simulation study. The results show that the copula test by Verdier performs better than Hotelling T\(^{2}\) in a large variety of out‐of‐control cases, whereas the weighted Vuong scheme often fails to provide an improvement.
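For orientation, a sketch of the classical Hotelling T\(^{2}\) statistic that the copula-based charts are benchmarked against (illustrative only; the in-control parameters below are estimated from an invented bivariate normal sample):

```python
import numpy as np

# Hypothetical in-control reference sample (bivariate normal, invented).
rng = np.random.default_rng(1)
in_control = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=500)

mu = in_control.mean(axis=0)
S_inv = np.linalg.inv(np.cov(in_control, rowvar=False))

def t2(x):
    """Hotelling T^2 statistic: squared Mahalanobis distance to the
    in-control mean, using the estimated covariance."""
    d = x - mu
    return float(d @ S_inv @ d)

# Points far from the in-control region yield large T^2 values and
# would trigger an out-of-control signal.
print(t2(np.array([0.1, -0.2])), t2(np.array([4.0, -4.0])))
```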
In financial mathematics, it is a typical approach to approximate financial markets operating in discrete time by continuous-time models such as the Black–Scholes model. Fitting this model gives rise to difficulties due to the discrete nature of market data. We thus model the pricing process of financial derivatives by the Black–Scholes equation, where the volatility is a function of a finite number of random variables. This reflects the influence of uncertain factors when determining volatility. The aim is to quantify the effect of this uncertainty when computing the price of derivatives. We use the generalized Polynomial Chaos (gPC) method to numerically compute the uncertainty of the solution, via the stochastic Galerkin approach and a finite difference method. We present an efficient numerical variation of this method based on a machine learning technique, the so-called Bi-Fidelity approach. This is illustrated with numerical examples.
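As a minimal illustration of the underlying question (not the gPC/Bi-Fidelity machinery itself), one can propagate uncertainty in the volatility through the closed-form Black–Scholes call price by sampling the uncertain parameter and inspecting the resulting price distribution; all parameter values below are invented:

```python
import math
import numpy as np

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Closed-form Black–Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Uncertain volatility: a uniformly distributed random variable
# (hypothetical range), sampled to estimate the price uncertainty.
rng = np.random.default_rng(0)
sigmas = rng.uniform(0.15, 0.25, size=2000)
prices = np.array([bs_call(100.0, 100.0, 0.01, s, 1.0) for s in sigmas])
print(prices.mean(), prices.std())  # mean price and its uncertainty
```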