519 Probabilities & Applied Mathematics
Mammalian embryonic development is governed by complex biological relationships that need to be understood. Before the whole structure of development can be put together, however, the individual building blocks must first be understood in more detail. One of these building blocks is the second cell fate decision, which describes the differentiation of cells of the inner cell mass of the embryo into epiblast and primitive endoderm cells. These cells then spatially segregate and form the subsequent bases for the embryo and the yolk sac, respectively. In organoids of the inner cell mass, these two types of progenitor cells are also observed to form and, to some extent, to spatially separate. This work has been devoted to these phenomena over the past three years. Numerous studies already provide insights into the basic mechanics of this cell differentiation: the first signs of epiblast and primitive endoderm differentiation are the expression levels of the transcription factors NANOG and GATA6. Cells with low expression of GATA6 and high expression of NANOG adopt the epiblast fate; if the expression levels are reversed, a primitive endoderm cell is formed. Regarding the spatial segregation of the two cell types, it is not yet clear which mechanism is responsible. A common hypothesis suggests differential adhesion of cells as the cause of the spatial rearrangement. In this thesis, however, the possibility of a global cell-cell communication is investigated. The approach chosen to study these phenomena follows the motto "mathematics is biology's next microscope". Mathematical modeling is used to transform the central gene regulatory network at the heart of this work into a system of equations that describes the temporal evolution of NANOG and GATA6 under the influence of an external signal. Special attention is paid to the derivation of new models using methods of statistical mechanics, as well as to the comparison with existing models.
After a detailed stability analysis, the advantages of the derived model become clear: an exact relationship between the model parameters and the formation of heterogeneous mixtures of the two cell types is established. Thus, the model can be easily controlled, and the proportions of the resulting cell types can be estimated in advance. This mathematical model is then combined with a mechanism for global cell-cell communication as well as with a model for the growth of an organoid. It is shown that global cell-cell communication can unify the formation of checkerboard patterns and engulfing patterns on the basis of differently propagating signals. In addition, the influence of cell division, and thus organoid growth, on pattern formation is studied in detail. It is shown that this can contribute to the formation of clusters and, as a consequence, breathe some randomness into otherwise perfectly sorted patterns.
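The class of models described above can be illustrated with a generic toy version (not the model derived in the thesis): a mutual-inhibition network in which NANOG represses GATA6 and vice versa through Hill functions. All parameter values (production rate alpha, Hill coefficient n) are illustrative assumptions; depending on the initial bias, the system settles into an epiblast-like (NANOG-high) or primitive-endoderm-like (GATA6-high) state.

```python
def simulate_fate(N0, G0, alpha=2.0, n=4, dt=0.01, T=50.0):
    """Euler integration of a toy NANOG/GATA6 mutual-inhibition ODE:

        dN/dt = alpha / (1 + G^n) - N
        dG/dt = alpha / (1 + N^n) - G

    Parameters are illustrative, not fitted to data.
    """
    N, G = N0, G0
    for _ in range(int(T / dt)):
        dN = alpha / (1.0 + G**n) - N
        dG = alpha / (1.0 + N**n) - G
        N, G = N + dt * dN, G + dt * dG
    return N, G

# a NANOG-biased initial state settles into the NANOG-high branch,
# a GATA6-biased one into the GATA6-high branch (bistability)
epi = simulate_fate(1.5, 0.5)
pre = simulate_fate(0.5, 1.5)
```

The bistability visible here is the minimal mechanism by which a single gene regulatory network can produce a heterogeneous mixture of two cell types from slightly different initial conditions.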
In this dissertation, we develop and analyze novel optimizing feedback laws for control-affine systems with real-valued state-dependent output (or objective) functions. Given a control-affine system, our goal is to derive an output-feedback law that asymptotically stabilizes the closed-loop system around states at which the output function attains a minimum value. The control strategy has to be designed in such a way that an implementation only requires real-time measurements of the output value. Additional information, like the current system state or the gradient vector of the output function, is not assumed to be known. A method that meets all these criteria is called an extremum seeking control law. We follow a recently established approach to extremum seeking control, which is based on approximations of Lie brackets. For this purpose, the measured output is modulated by suitable highly oscillatory signals and is then fed back into the system. Averaging techniques for control-affine systems with highly oscillatory inputs reveal that the closed-loop system is driven, at least approximately, into the directions of certain Lie brackets. A suitable design of the control law ensures that these Lie brackets point into descent directions of the output function. Under suitable assumptions, this method leads to the effect that minima of the output function are practically uniformly asymptotically stable for the closed-loop system. The present document extends and improves this approach in various ways.
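The Lie-bracket averaging idea can be sketched in the simplest scalar case (a textbook-style scheme with illustrative parameters, not the exact control laws of the dissertation): feeding the measured output h(x) back through the highly oscillatory input u = sqrt(alpha*omega) * (h(x) cos(omega t) + sin(omega t)) yields a closed loop whose average behaves like gradient descent, dx/dt ≈ -(alpha/2) h'(x), even though h'(x) is never measured.

```python
import math

def extremum_seek(h, x0, omega=100.0, alpha=1.0, dt=1e-4, T=8.0):
    """Scalar Lie-bracket extremum seeking: only output values h(x) are used.

    Averaging shows the trajectory approximately follows
    dx/dt = -(alpha/2) * h'(x), i.e. gradient descent on the unknown map h.
    """
    x, t = x0, 0.0
    for _ in range(int(T / dt)):
        u = math.sqrt(alpha * omega) * (h(x) * math.cos(omega * t)
                                        + math.sin(omega * t))
        x += dt * u
        t += dt
    return x

# quadratic output map with its minimum at 0; the gradient is never queried
x_final = extremum_seek(lambda x: x * x, 2.0)
```

The residual oscillation around the minimum scales like sqrt(alpha/omega), which is exactly the "practical" (rather than exact) stability discussed above.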
One of the novelties is a control strategy that leads not only to practical asymptotic stability but in fact to asymptotic and even exponential stability. In this context, we focus on the application of distance-based formation control in autonomous multi-agent systems in which only distance measurements are available. This means that the target formations as well as the sensed variables are determined by distances. We propose a fully distributed control law that involves only distance measurements for each individual agent to stabilize a desired formation shape, while no storage of measured data is required. The approach is applicable to point agents in the Euclidean space of arbitrary (but finite) dimension. Under the assumption of infinitesimal rigidity of the target formations, we show that the proposed control law induces local uniform asymptotic (and even exponential) stability. A similar statement is derived for nonholonomic unicycle agents with all-to-all communication. We also show how the findings can be used to solve extremum seeking control problems.
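For orientation, the classical rigidity-based gradient law can be sketched (note the hedge: unlike the thesis's distance-only feedback, this textbook law uses relative position measurements; target distances and initial positions below are illustrative). Each agent descends the squared-distance-error potential with respect to its neighbors.

```python
def formation_step(pos, d, dt):
    """One Euler step of the classical rigidity-based gradient law:
    agent i moves along -sum_j (||x_i - x_j||^2 - d_ij^2) * (x_i - x_j)."""
    out = []
    for i, (xi, yi) in enumerate(pos):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            e = dx * dx + dy * dy - d[i][j] ** 2   # squared-distance error
            fx -= e * dx
            fy -= e * dy
        out.append((xi + dt * fx, yi + dt * fy))
    return out

# target: equilateral triangle with unit side lengths (infinitesimally rigid)
d = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
pos = [(0.0, 0.0), (1.1, 0.1), (0.4, 0.9)]   # perturbed initial positions
for _ in range(5000):
    pos = formation_step(pos, d, 0.01)
dists = [((pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2) ** 0.5
         for i in range(3) for j in range(i + 1, 3)]
```

For infinitesimally rigid target formations such as this triangle, the law is locally exponentially stable, which is the benchmark behavior that the distance-only strategy of the thesis recovers without access to relative positions.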
Another contribution is an extremum seeking control law with an adaptive dither signal. We present an output-feedback law that steers a fully actuated control-affine system with a general drift vector field to a minimum of the output function. A key novelty of the approach is an adaptive choice of the frequency parameter, which renders the task of determining a sufficiently large frequency parameter obsolete. The adaptive choice of the frequency parameter also prevents finite escape times in the presence of a drift. The proposed control law leads not only to convergence into a neighborhood of a minimum, but to exact convergence. For the case of an output function with a global minimum and no other critical point, we prove global convergence.
Finally, we present an extremum seeking control law for a class of nonholonomic systems. A detailed averaging analysis reveals that the closed-loop system is driven approximately into descent directions of the output function along Lie brackets of the control vector fields. Those descent directions also originate from an approximation of suitably chosen Lie brackets. This requires a two-fold approximation of Lie brackets on different time scales. The proposed method can lead to practical asymptotic stability even if the control vector fields do not span the entire tangent space. It suffices instead that the tangent space is spanned by the elements in the Lie algebra generated by the control vector fields. This novel feature extends extremum seeking by Lie bracket approximations from the class of fully actuated systems to a larger class of nonholonomic systems.
This thesis covers a wide range of results for the case in which a random vector is in the max-domain of attraction of a max-stable random vector. It states some new theoretical results in D-norm terminology, but also gives an explanation of why most approaches to multivariate extremes are equivalent to this specific approach. It then covers new methods for dealing with high-dimensional extremes, ranging from dimension reduction to exploratory methods, and explains why the Hüsler-Reiss model is a powerful parametric model in multivariate extremes, on par with the multivariate Gaussian distribution in classical multivariate statistics. It also gives new results for estimating and inferring the multivariate extremal dependence structure, presents strategies for choosing thresholds, and compares the behavior of local and global threshold approaches. The methods are demonstrated in an artificial simulation study as well as on German weather data.
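The D-norm terminology mentioned above admits a direct Monte Carlo illustration. A D-norm is defined by ||x||_D = E[max_i |x_i| Z_i] for a generator Z with Z_i >= 0 and E[Z_i] = 1; the generators below are standard textbook choices picked for illustration, not taken from the thesis.

```python
import random

def dnorm_mc(x, generator, n=200000):
    """Monte Carlo estimate of the D-norm ||x||_D = E[ max_i |x_i| * Z_i ]."""
    total = 0.0
    for _ in range(n):
        z = generator()
        total += max(abs(xi) * zi for xi, zi in zip(x, z))
    return total / n

random.seed(1)
# independence generator in dimension 2: Z = (2B, 2(1-B)), B ~ Bernoulli(1/2);
# the resulting D-norm is the 1-norm |x_1| + |x_2|
indep = lambda: (2.0, 0.0) if random.random() < 0.5 else (0.0, 2.0)
est_indep = dnorm_mc((1.0, 2.0), indep)
# complete dependence: the constant generator Z = (1, 1) gives the sup-norm
est_dep = dnorm_mc((1.0, 2.0), lambda: (1.0, 1.0))
```

The two limiting cases bracket every D-norm: tail independence corresponds to the 1-norm and complete tail dependence to the sup-norm, which is why the D-norm encodes the extremal dependence structure.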
A mathematical optimal-control tumor therapy framework consisting of radio- and anti-angiogenesis control strategies that are included in a tumor growth model is investigated. The governing system, resulting from the combination of two well-established models, represents the differential constraint of a non-smooth optimal control problem that aims at reducing the volume of the tumor while keeping the radio- and anti-angiogenesis chemical dosage to a minimum. Existence of optimal solutions is proved, and necessary conditions are formulated in terms of the Pontryagin maximum principle. Based on this principle, a so-called sequential quadratic Hamiltonian (SQH) method is discussed and benchmarked against an interior-point optimizer combined with a mathematical programming language (IPOPT-AMPL). Results of numerical experiments are presented that successfully validate the SQH solution scheme. Further, it is shown how to choose the optimisation weights in order to obtain treatment functions that successfully reduce the tumor volume to zero.
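The SQH iteration can be sketched on a toy linear-quadratic problem rather than the tumor model itself (a minimal sketch; the penalty-update constants and acceptance threshold are illustrative assumptions): at each sweep the control is updated by pointwise minimization of the Hamiltonian augmented with a quadratic penalty eps*(u - u_old)^2, and eps is adapted depending on whether the cost decreased.

```python
def sqh_solve(x0=1.0, T=1.0, N=200, iters=60):
    """SQH sketch for: minimize J = int_0^T (x^2 + u^2) dt s.t. dx/dt = u."""
    dt = T / N
    u = [0.0] * N

    def forward(u):
        x = [x0] * (N + 1)
        for k in range(N):
            x[k + 1] = x[k] + dt * u[k]
        return x

    def cost(x, u):
        return sum(dt * (x[k] ** 2 + u[k] ** 2) for k in range(N))

    def backward(x):                       # adjoint: p' = -2x, p(T) = 0
        p = [0.0] * (N + 1)
        for k in range(N, 0, -1):
            p[k - 1] = p[k] + dt * 2.0 * x[k]
        return p

    x = forward(u)
    J = cost(x, u)
    eps = 1.0
    for _ in range(iters):
        p = backward(x)
        # pointwise minimizer of H_eps = p*u + x^2 + u^2 + eps*(u - u_old)^2
        u_new = [(2.0 * eps * u[k] - p[k]) / (2.0 + 2.0 * eps) for k in range(N)]
        x_new = forward(u_new)
        J_new = cost(x_new, u_new)
        du2 = sum(dt * (u_new[k] - u[k]) ** 2 for k in range(N))
        if J_new - J <= -0.1 * du2:        # sufficient decrease: accept step
            u, x, J = u_new, x_new, J_new
            eps *= 0.9                     # relax the penalty
        else:
            eps *= 2.0                     # reject and tighten the penalty
    return J, u

J, u_opt = sqh_solve()
```

For this problem the continuous-time optimal cost is tanh(1) ≈ 0.762 (Riccati solution), which the discretized SQH iterate approaches up to O(dt).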
Statistical procedures for modelling a random phenomenon heavily depend on the choice of a certain family of probability distributions. Frequently, this choice is governed by good mathematical feasibility but disregards that some distribution properties may contradict reality. At best, the chosen distribution may be considered an approximation. The present thesis starts with a construction of distributions that uses solely the available information and yields distributions having greatest uncertainty in the sense of the maximum entropy principle. One such distribution is the monotonic distribution, which is determined solely by its support and its mean. Although classical frequentist statistics provides estimation procedures that may incorporate prior information, such procedures are rarely considered. A general frequentist scheme for the construction of shortest confidence intervals for distribution parameters under prior information is presented. In particular, the scheme is used to establish confidence intervals for the mean of the monotonic distribution and is compared to classical procedures. Additionally, an approximate procedure for the upper bound of the support of the monotonic distribution is proposed. A core purpose of audit sampling is the determination of confidence intervals for the mean of zero-inflated populations. The monotonic distribution is used to model such a population and to derive a confidence interval for the mean under prior information. The results are compared to two-sided intervals of Stringer type.
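The maximum entropy principle can be illustrated in its simplest classical instance (a bounded support plus a mean constraint; this is not the thesis's monotonic distribution, and the numbers are illustrative): among all densities on [0, 1] with a prescribed mean, the entropy maximizer is a truncated exponential c*exp(-lambda*x), with lambda fixed by the mean constraint.

```python
import math

def trunc_exp_mean(lam):
    """Mean of the density c * exp(-lam * x) on [0, 1]."""
    return 1.0 / lam - 1.0 / math.expm1(lam)

def solve_rate(mu, lo=1e-6, hi=60.0, iters=200):
    """Bisection for lam with trunc_exp_mean(lam) == mu.

    The mean is strictly decreasing in lam, from 1/2 (lam -> 0) toward 0.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if trunc_exp_mean(mid) > mu:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# max-entropy density on [0, 1] with prescribed mean 0.3
lam = solve_rate(0.3)
```

The same one-parameter logic is what makes such constructions attractive for prior-information procedures: the distribution is pinned down entirely by its support and its mean.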
An efficient multigrid finite-difference scheme for solving elliptic Fredholm partial integro-differential equations (PIDEs) is discussed. This scheme combines a second-order accurate finite-difference discretization of the PIDE problem with a multigrid scheme that includes a fast multilevel integration of the Fredholm operator, allowing the fast solution of the PIDE problem. Theoretical estimates of second-order accuracy and results of a local Fourier analysis of the convergence of the proposed multigrid scheme are presented. Results of numerical experiments validate these estimates and demonstrate optimal computational complexity of the proposed framework.
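The multigrid component (without the Fredholm part) can be sketched as a basic two-grid correction for the 1D Poisson problem -u'' = f with zero Dirichlet boundaries; smoothing counts, grid size, and the right-hand side are illustrative choices, not the scheme analyzed above.

```python
import math

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f, zero Dirichlet boundaries."""
    n = len(u)
    for _ in range(sweeps):
        v = u[:]
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            v[i] = (1 - w) * u[i] + w * 0.5 * (left + right + h * h * f[i])
        u = v
    return u

def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = f[i] - (2 * u[i] - left - right) / (h * h)
    return r

def thomas(rhs, H):
    """Direct tridiagonal solve of the coarse operator (Thomas algorithm)."""
    n = len(rhs)
    a, b = -1.0 / (H * H), 2.0 / (H * H)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = a / b, rhs[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = a / m
        dp[i] = (rhs[i] - a * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def two_grid(u, f, h):
    u = jacobi(u, f, h, 3)                         # pre-smoothing
    r = residual(u, f, h)
    rc = [0.25 * r[2 * j] + 0.5 * r[2 * j + 1] + 0.25 * r[2 * j + 2]
          for j in range((len(r) - 1) // 2)]       # full-weighting restriction
    ec = thomas(rc, 2 * h)                         # exact coarse-grid solve
    e = [0.0] * len(u)                             # linear interpolation
    for j, v in enumerate(ec):
        e[2 * j + 1] = v
    for i in range(0, len(u), 2):
        left = ec[i // 2 - 1] if i >= 2 else 0.0
        right = ec[i // 2] if i // 2 < len(ec) else 0.0
        e[i] = 0.5 * (left + right)
    u = [ui + ei for ui, ei in zip(u, e)]
    return jacobi(u, f, h, 3)                      # post-smoothing

n, h = 127, 1.0 / 128
f = [math.sin(math.pi * (i + 1) * h) for i in range(n)]
u = [0.0] * n
res0 = max(abs(x) for x in residual(u, f, h))
for _ in range(10):
    u = two_grid(u, f, h)
res = max(abs(x) for x in residual(u, f, h))
```

The residual contracts by a grid-size-independent factor per cycle, which is the mechanism behind the optimal computational complexity claimed for the full multigrid-PIDE framework.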
A framework for the optimal sparse control of the probability density function of a jump-diffusion process is presented. This framework is based on the partial integro-differential Fokker-Planck (FP) equation that governs the time evolution of the probability density function of this process. In the stochastic process and, correspondingly, in the FP model, the control function enters as a time-dependent coefficient. The objective of the control is to minimize a discrete-in-time, respectively continuous-in-time, tracking functional together with its L2- and L1-costs, where the latter is included to promote control sparsity. An efficient proximal scheme for solving these optimal control problems is considered. Results of numerical experiments are presented that validate the theoretical results and the computational effectiveness of the proposed control framework.
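The role of the L1-cost in a proximal scheme can be sketched on a toy diagonal least-squares problem (not the FP-constrained one; all coefficients are illustrative): the proximal operator of beta*|.|_1 is soft-thresholding, which sets small coefficients exactly to zero, and the resulting proximal-gradient (ISTA-type) iteration converges to the sparse minimizer.

```python
def soft(v, t):
    """Proximal operator of t * |.| (soft-thresholding)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(a, b, beta, tau, iters=500):
    """Proximal-gradient iteration for
    min_u 0.5 * sum_i (a_i * u_i - b_i)^2 + beta * sum_i |u_i|  (diagonal A)."""
    u = [0.0] * len(a)
    for _ in range(iters):
        grad = [a[i] * (a[i] * u[i] - b[i]) for i in range(len(a))]
        u = [soft(u[i] - tau * grad[i], tau * beta) for i in range(len(a))]
    return u

u = ista([1.0, 2.0], [1.0, 0.4], beta=0.5, tau=0.25)
# for diagonal A the minimizer is known in closed form:
# u_i = soft(b_i / a_i, beta / a_i^2), i.e. (0.5, 0.075) here
```

The same soft-thresholding step, applied to the control update, is what makes L1-penalized optimal controls vanish identically on subintervals, i.e. what produces sparsity.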
This article introduces a new consistent variance-based estimator called ordinal consistent partial least squares (OrdPLSc). OrdPLSc completes the family of variance-based estimators consisting of PLS, PLSc, and OrdPLS and permits the estimation of structural equation models of composites and common factors when some or all indicators are measured on an ordinal categorical scale. A Monte Carlo simulation (N = 500) with different population models shows that OrdPLSc provides almost unbiased estimates. If all constructs are modeled as common factors, OrdPLSc yields estimates close to those of its covariance-based counterpart, WLSMV, but is less efficient. If some constructs are modeled as composites, OrdPLSc is virtually without competition.
Extreme value theory aims at modeling extreme but rare events from a probabilistic point of view. It is well-known that so-called generalized Pareto distributions, which are briefly reviewed in Chapter 1, are the only reasonable probability distributions suited for modeling observations above a high threshold, such as waves exceeding the height of a certain dike, earthquakes having at least a certain intensity, and, after applying a simple transformation, share prices falling below some low threshold. However, there are cases for which a generalized Pareto model might fail. Therefore, Chapter 2 derives certain neighborhoods of a generalized Pareto distribution and provides several statistical tests for these neighborhoods, where both the case of observing finite dimensional data and that of observing continuous functions on [0,1] are considered. By using a notation based on so-called D-norms, it is shown that these tests consistently link both frameworks, the finite dimensional and the functional one. Since the derivation of the asymptotic distributions of the test statistics requires certain technical restrictions, Chapter 3 analyzes these assumptions in more detail. In particular, it provides examples of distributions that satisfy the null hypothesis and of those that do not. Since continuous copula processes are crucial tools for the functional versions of the proposed tests, it is also discussed whether those copula processes actually exist for a given set of data. Moreover, practical advice is given on how to choose the free parameters incorporated in the test statistics. Finally, a simulation study in Chapter 4 compares the three proposed test statistics with another test from the literature that has a similar null hypothesis. The thesis ends with a short summary of the results and an outlook on further open questions.
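The threshold-stability property that singles out generalized Pareto distributions (GPDs) for peaks-over-threshold modeling can be checked by simulation (parameter values are illustrative): if X ~ GPD(xi, sigma), then the exceedances X - u, conditional on X > u, again follow a GPD with the same shape xi and scale sigma + xi*u.

```python
import random

def rgpd(xi, sigma):
    """Inverse-CDF sampling from GPD(xi, sigma): X = sigma*((1-U)^(-xi) - 1)/xi."""
    u = random.random()
    return sigma * ((1.0 - u) ** (-xi) - 1.0) / xi

random.seed(7)
xi, sigma, thr = 0.25, 1.0, 1.0
exc = []
for _ in range(400000):
    x = rgpd(xi, sigma)
    if x > thr:
        exc.append(x - thr)

# threshold stability: exceedances follow GPD(xi, sigma + xi*thr),
# whose mean is (sigma + xi*thr) / (1 - xi) for xi < 1
mean_exc = sum(exc) / len(exc)
expected = (sigma + xi * thr) / (1.0 - xi)
```

Exactly this stability under raising the threshold is what can fail for distributions outside the GPD neighborhoods studied in Chapter 2, and is therefore a natural target for the proposed tests.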
Measurements of the centrality and rapidity dependence of inclusive jet production in \(\sqrt{s_{NN}}=5.02\) TeV proton–lead (p+Pb) collisions and the jet cross-section in \(\sqrt{s}=2.76\) TeV proton–proton collisions are presented. These quantities are measured in datasets corresponding to an integrated luminosity of 27.8 nb\(^{−1}\) and 4.0 pb\(^{−1}\), respectively, recorded with the ATLAS detector at the Large Hadron Collider in 2013. The p+Pb collision centrality was characterised using the total transverse energy measured in the pseudorapidity interval −4.9<η<−3.2 in the direction of the lead beam. Results are presented for the double-differential per-collision yields as a function of jet rapidity and transverse momentum (\(p_T\)) for minimum-bias and centrality-selected p+Pb collisions, and are compared to the jet rate from the geometric expectation. The total jet yield in minimum-bias events is slightly enhanced above the expectation in a \(p_T\)-dependent manner but is consistent with the expectation within uncertainties. The ratios of jet spectra from different centrality selections show a strong modification of jet production at all \(p_T\) at forward rapidities and for large \(p_T\) at mid-rapidity, which manifests as a suppression of the jet yield in central events and an enhancement in peripheral events. These effects imply that the factorisation between hard and soft processes is violated at an unexpected level in proton–nucleus collisions. Furthermore, the modifications at forward rapidities are found to be a function of the total jet energy only, implying that the violations may have a simple dependence on the hard parton–parton kinematics.