@phdthesis{Zilian2014, author = {Zilian, David}, title = {Neuartige, empirische Scoring-Modelle f{\"u}r Protein-Ligand-Komplexe und computergest{\"u}tzte Entwicklung von Hsp70-Inhibitoren}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-105055}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2014}, abstract = {Computer-aided drug design techniques play an important role in the development of new drugs. This thesis deals with both the development and the practical application of structure-based drug design methods and is therefore divided into two parts. The first part addresses the development of empirical scoring functions, which play a key role in structure-based computer-aided drug design. These studies build on the empirical descriptors and scoring functions of the SFCscore program package. First, it was investigated how the composition of the training data affects the predictions of empirical scoring functions. By deliberately assembling a new training dataset, an attempt was made to widen the range of the predictions and thereby, above all, to achieve better recognition of high- and low-affinity complexes. The resulting function yielded improved predictions especially in the low-affinity range. The second topic likewise concerns the improved separation of active and inactive compounds. Using the machine learning method RandomForest, classification models were derived that, unlike classical scoring functions, do not return an exact score but instead classify compounds according to their potential activity. Using the mycobacterial enzyme InhA as an example, it was shown that such models are clearly superior to classical scoring functions with respect to the recognition of active compounds. In the next step, the RandomForest algorithm was also used to derive a new scoring function for the prediction of binding affinities. This function was implemented in the SFCscore program package under the name SFCscoreRF. It differs from the original SFCscore functions in several essential respects. First, the RF algorithm is a non-linear method that, unlike the classical methods used to derive scoring functions, does not assume additivity of the individual descriptors. The algorithm furthermore allows the use of all available SFCscore descriptors, which enables a much more comprehensive representation of protein-ligand complexes as the basis for scoring. A total of 1005 complexes were used in the training dataset for deriving SFCscoreRF, making it one of the largest datasets used so far for the derivation of an empirical scoring function. Evaluation against two benchmark datasets showed clearly better predictions by SFCscoreRF compared to the original SFCscore functions. In an international comparison with other scoring functions, top results were achieved for both datasets as well. 
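As a rough, hypothetical illustration of the kind of workflow summarized above - a random forest regressor trained on per-complex descriptors to predict binding affinities - a minimal scikit-learn sketch could look as follows; the descriptor and affinity arrays are synthetic placeholders, not the actual SFCscoreRF data or implementation.

# Hedged sketch of a random-forest scoring function in the spirit of SFCscoreRF.
# Hypothetical inputs: X holds SFCscore-like descriptors per complex, y holds affinities (e.g. pKd).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1005, 60))                                  # 1005 complexes, 60 descriptors (placeholders)
y = 2.0 + 10.0 * X[:, 0] + rng.normal(0, 1, 1005)           # synthetic affinities for demonstration only

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("Pearson r on held-out complexes:", np.corrcoef(pred, y_test)[0, 1])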
Further extensive testing in the course of a leave-cluster-out validation and participation in the CSAR 2012 Benchmark Exercise showed that SFCscoreRF, too, exhibits performance fluctuations when applied to protein-specific datasets - a phenomenon that is consistently observed for scoring functions. The analysis of the CSAR 2012 datasets additionally yielded important insights regarding the prediction of docked poses and the statistical significance of scoring function evaluations. The fact that empirical scoring functions are trained within a certain chemical space is an important factor behind the protein-dependent performance fluctuations observed in this work. Reliable predictions are only possible within the calibrated chemical space. In this work, several approaches were investigated for defining this ``applicability domain'' of the SFCscore functions. Using PCA analyses, it was possible to visualize the ``applicability domain'' of individual functions. In addition, a number of numerical descriptors were tested with which the prediction reliability could be estimated on the basis of the ``applicability domain''. The RF proximity proved to be a promising starting point for further developments. The second part of the thesis deals with the development of new inhibitors of the chaperone Hsp70, which is a promising target for the therapy of multiple myeloma. These studies were based on a lead structure that had been discovered in previous work and that presumably acts at a novel binding site in the interface region between the two large domains of Hsp70. The initial focus was on the further development and optimization of this lead structure, a tetrahydroisoquinolinone derivative. Detailed docking analyses were used to investigate the potential binding mode of the lead structure in the interface region of Hsp70. Based on these results, a compound library was designed, which was synthesized and biologically tested by cooperation partners within the KFO 216. The structure-activity relationships derived from these experimental data could in part be correlated well with the docking models, whereas other effects could not be explained from the docking poses. The development of new derivatives therefore requires a more comprehensive experimental characterization and, building on it, a refinement of the binding models. Structurally, Hsp70 is a two-domain system that can adopt different allosteric states. To investigate the effects of the resulting flexibility on the stability of the structure and on inhibitor binding, molecular dynamics simulations of the protein were performed. These show that the protein indeed exhibits above-average flexibility, which is dominated mainly by the relative motion of the two large domains with respect to each other. The protein conformation observed in the crystal structure hscaz, however, retains its basic architecture in all four simulations performed. 
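Referring back to the ``applicability domain'' analysis described earlier in this abstract, one way such a check could be sketched with PCA - using random placeholder descriptors and a crude three-sigma rule that is not the procedure used in the thesis - is:

# Sketch of inspecting an "applicability domain" in PCA space (placeholder data, illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
train_desc = rng.normal(size=(1005, 60))          # descriptors of the training complexes
query_desc = rng.normal(loc=1.5, size=(20, 60))   # descriptors of new complexes to be scored

scaler = StandardScaler().fit(train_desc)
pca = PCA(n_components=2).fit(scaler.transform(train_desc))

train_2d = pca.transform(scaler.transform(train_desc))
query_2d = pca.transform(scaler.transform(query_desc))

# crude check: flag query complexes lying far outside the training cloud in PCA space
center, spread = train_2d.mean(axis=0), train_2d.std(axis=0)
outside = np.any(np.abs(query_2d - center) > 3 * spread, axis=1)
print("queries outside the (rough) applicability domain:", int(outside.sum()))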
In contrast, no evidence was found that the mutations which the crystal structure used for the structure-based work carries compared to the wild type have a critical influence on the overall stability of the system. Although the interface region between NBD and SBD is thus retained in all simulations, the conformation in this region is nevertheless substantially influenced by the domain motion and varies. Since this region of the protein represents the most likely site of action of the tetrahydroisoquinolinones, its conformational space was examined in detail. As expected, the region exhibits considerable flexibility, which is moreover strongly influenced by the presence of a ligand (apoptozole) in the sense of an ``induced-fit'' mechanism. It is therefore very likely that the dynamics of the interface region also have a substantial influence on the binding of the tetrahydroisoquinolinones, and molecular dynamics calculations will consequently continue to play an important role in future work in this field. The analyses also show that the conformation of the interface region is closely linked to the conformation of the protein as a whole - above all with respect to the relative orientation of SBD and NBD. This supports the hypothesis that the interface binding pocket represents a point of attack for inhibition of the protein.}, subject = {Arzneimittelforschung}, language = {de} } @phdthesis{Winkler2015, author = {Winkler, Marco}, title = {On the Role of Triadic Substructures in Complex Networks}, publisher = {epubli GmbH}, address = {Berlin}, isbn = {978-3-7375-5654-5}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-116022}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2015}, abstract = {In the course of the growth of the Internet and due to the increasing availability of data, the field of network science has established itself as a research area in its own right over the last two decades. With quantitative scientists from computer science, mathematics, and physics working on datasets from biology, economics, sociology, political sciences, and many others, network science serves as a paradigm for interdisciplinary research. One of the major goals in network science is to unravel the relationship between topological graph structure and a network's function. As evidence suggests, systems from the same fields, i.e. with similar function, tend to exhibit similar structure. However, it remains unclear whether a similar graph structure automatically implies similar function. This dissertation aims at helping to bridge this gap, while particularly focusing on the role of triadic structures. After a general introduction to the main concepts of network science, existing work devoted to the relevance of triadic substructures is reviewed. A major challenge in modeling triadic structure is the fact that not all three-node subgraphs can be specified independently of each other, as pairs of nodes may participate in multiple of those triadic subgraphs. In order to overcome this obstacle, we suggest a novel class of generative network models based on so-called Steiner triple systems. The latter are partitions of a graph's vertices into pair-disjoint triples (Steiner triples). Thus, the configurations on Steiner triples can be specified independently of each other without overdetermining the network's link structure. 
Subsequently, we investigate the most basic realization of this new class of models. We call it the triadic random graph model (TRGM). The TRGM is parametrized by a probability distribution over all possible triadic subgraph patterns. In order to generate a network instantiation of the model, for all Steiner triples in the system, a pattern is drawn from the distribution and randomly placed on the Steiner triple. We calculate the degree distribution of the TRGM analytically and find it to be similar to a Poissonian distribution. Furthermore, it is shown that TRGMs possess non-trivial triadic structure. We discover inevitable correlations in the abundance of certain triadic subgraph patterns which should be taken into account when attributing functional relevance to particular motifs - patterns which occur significantly more frequently than expected at random. Beyond that, the strong impact of the probability distributions on the Steiner triples on the occurrence of triadic subgraphs over the whole network is demonstrated. This interdependence allows us to design ensembles of networks with predefined triadic substructure. Hence, TRGMs help to overcome the lack of generative models needed for assessing the relevance of triadic structure. We further investigate whether motifs occur homogeneously or heterogeneously distributed over a graph. To this end, we study triadic subgraph structures in each node's neighborhood individually. In order to quantitatively measure structure from an individual node's perspective, we introduce an algorithm for node-specific pattern mining for both directed unsigned and undirected signed networks. Analyzing real-world datasets, we find that there are networks in which motifs are distributed highly heterogeneously, bound to the proximity of only very few nodes. Moreover, we observe indications of the potential sensitivity of biological systems to a targeted removal of these critical vertices. In addition, we study whole graphs with respect to the homogeneity and homophily of their node-specific triadic structure. The former describes the similarity of subgraph distributions in the neighborhoods of individual vertices. The latter quantifies whether connected vertices are structurally more similar than non-connected ones. We discover these features to be characteristic of the networks' origins. Moreover, clustering the vertices of graphs regarding their triadic structure, we investigate structural groups in the neural network of C. elegans, the international airport-connection network, and the global network of diplomatic sentiments between countries. For the latter we find evidence for the instability of triangles considered socially unbalanced according to sociological theories. Finally, we utilize our TRGM to explore ensembles of networks with similar triadic substructure in terms of the evolution of dynamical processes acting on their nodes. 
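To make the generation procedure described above concrete, here is a deliberately simplified toy sketch; for brevity it uses disjoint vertex triples instead of a full Steiner triple system (where every vertex pair lies in exactly one triple) and an assumed pattern distribution, so it illustrates the principle rather than the model studied in the thesis.

# Toy sketch of a TRGM-style generator (strong simplification, see note above).
import random
import networkx as nx

# undirected triad patterns on a labeled triple (0, 1, 2): which of the three possible edges exist
PATTERNS = {
    "empty":    [],
    "one_edge": [(0, 1)],
    "path":     [(0, 1), (1, 2)],
    "triangle": [(0, 1), (1, 2), (0, 2)],
}
WEIGHTS = {"empty": 0.2, "one_edge": 0.3, "path": 0.3, "triangle": 0.2}  # assumed distribution

def toy_trgm(n=99, seed=42):
    rng = random.Random(seed)
    nodes = list(range(n))
    rng.shuffle(nodes)
    g = nx.Graph()
    g.add_nodes_from(nodes)
    for i in range(0, n - n % 3, 3):                         # disjoint vertex triples
        triple = nodes[i:i + 3]
        rng.shuffle(triple)                                  # random placement of the drawn pattern
        name = rng.choices(list(PATTERNS), weights=list(WEIGHTS.values()))[0]
        for a, b in PATTERNS[name]:
            g.add_edge(triple[a], triple[b])
    return g

print(toy_trgm().number_of_edges())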
Focusing on oscillators coupled along the graphs' edges, we observe that certain triad motifs impose a clear signature on the systems' dynamics, even when embedded in a larger network structure.}, subject = {Netzwerk}, language = {en} } @phdthesis{Weigand2024, author = {Weigand, Matthias Johann}, title = {Fernerkundung und maschinelles Lernen zur Erfassung von urbanem Gr{\"u}n - Eine Analyse am Beispiel der Verteilungsgerechtigkeit in Deutschland}, doi = {10.25972/OPUS-34961}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-349610}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2024}, abstract = {Green spaces are among the most important environmental factors in people's residential environment. On the one hand, they have a positive effect on people's physical and mental health; on the other hand, green spaces can also mitigate the negative effects of other factors, such as the heat events that are becoming more frequent in the course of climate change. Nevertheless, green spaces are not equally accessible to the entire population. Existing research in the context of environmental justice (EJ) has already shown that different socio-economic and demographic groups of the German population have different access to green spaces. Existing analyses of environmental factors in the EJ context have been criticized because geographic data are often evaluated at an overly aggregated level, so that locally specific exposures are no longer represented accurately. This applies in particular to studies covering large areas, and important spatial information is lost as a result. Yet modern Earth observation data and geodata are more detailed than ever, and machine learning methods enable their efficient processing into higher-value information. The overarching goal of this work is to demonstrate and carry out, using the example of green spaces in Germany, the methodological steps for systematically transforming comprehensive geodata into relevant geoinformation for large-area, high-resolution analyses of environmental characteristics. At the interface of remote sensing, geoinformatics, social geography, and environmental justice research, the potential of modern methods for improving the spatial and semantic resolution of geoinformation is explored. To this end, machine learning methods are used to map land cover and land use at the national level. These developments are intended to help close existing data gaps and to shed light on the distributional justice of green spaces. This dissertation is structured in three conceptual parts. In the first part, Earth observation data from the Sentinel-2 satellites are used for a Germany-wide classification of land cover information. A machine learning model is trained in combination with point reference data on land cover and land use from the Europe-wide Land Use and Coverage Area Frame Survey (LUCAS). In this context, different preprocessing steps of the LUCAS data and their influence on the classification accuracy are examined. The classification approach is able to derive land cover information with high accuracy even in complex urban areas. 
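A minimal, hypothetical sketch of this kind of pixel-based workflow - a random forest land cover classifier trained on Sentinel-2 band values sampled at LUCAS reference points - might look as follows; the arrays are placeholders, and the actual pipeline in the thesis involves considerably more preprocessing.

# Hedged sketch: land cover classification from spectral band values at labeled reference points.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_points, n_bands = 5000, 10                     # e.g. 10 Sentinel-2 spectral bands
reflectance = rng.random((n_points, n_bands))    # band values sampled at LUCAS points (placeholder)
labels = rng.integers(0, 8, n_points)            # 8 hypothetical land cover classes

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, reflectance, labels, cv=5).mean())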
One result of this study part is a Germany-wide land cover classification with an overall accuracy of 93.07 \%, which is used in the further course of the work to spatially quantify green land cover (GLC). The second conceptual part of the work focuses on a differentiated view of green spaces, using the example of public green spaces (PGS), which are frequently the subject of EJ research. However, a commonly used source of spatial data on public green spaces, the European Urban Atlas (EUA), has so far not been compiled for the whole of Germany. This part of the study pursues a data-driven approach to determine the availability of public green space at the spatial level of neighborhoods for all of Germany, using areas already covered by the EUA as a reference. Combining Earth observation data with information from the OpenStreetMap project, a deep learning-based fusion network is built that quantifies the available area of public green space. The result of this step is a model that is used to estimate the amount of public green space in a neighborhood (R² = 0.952). The third part of the study takes up the results of the first two parts and examines the distribution of green spaces in Germany, adding georeferenced population data. This exemplary analysis distinguishes two types of green space: GLC and PGS. First, descriptive statistics are used to examine the general distribution of green space across the German population. Subsequently, distributional justice is assessed using common equity metrics. Finally, the relationships between the demographic composition of a neighborhood and the available amount of green space are examined for three exemplary sociodemographic groups. The analysis shows strong differences in the availability of PGS between urban and rural areas. A higher percentage of the urban population has access to the minimum amount of PGS recommended by the World Health Organization. The results also reveal a clear difference in distributional justice between GLC and PGS and underline the relevance of distinguishing between types of green space in such investigations. The concluding examination of different population groups works out differences at the sociodemographic level. Taken together, this work demonstrates how modern geodata and machine learning methods can be used to overcome previous limitations of spatial datasets. Using the example of green spaces in the residential environment of the German population, it is shown that nationwide environmental justice analyses can be enriched by high-resolution, locally fine-grained geographic information. 
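The thesis's actual fusion architecture is not reproduced in this abstract; purely as a hedged illustration of the general pattern described above (one branch for an Earth observation patch, one for OpenStreetMap-derived neighborhood features, concatenated into a regression head), a PyTorch toy model might look like this:

# Hedged sketch of a two-branch fusion regressor (assumed shapes; not the thesis's network).
import torch
import torch.nn as nn

class FusionRegressor(nn.Module):
    def __init__(self, eo_channels=4, osm_features=8):
        super().__init__()
        # image branch for an Earth observation patch, e.g. 4 spectral bands, 32x32 pixels
        self.eo_branch = nn.Sequential(
            nn.Conv2d(eo_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 32 features
        )
        # tabular branch for OSM-derived neighborhood features (counts, areas, ...)
        self.osm_branch = nn.Sequential(nn.Linear(osm_features, 32), nn.ReLU())
        # fused head regressing the available public green space per neighborhood
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, eo_patch, osm_vec):
        fused = torch.cat([self.eo_branch(eo_patch), self.osm_branch(osm_vec)], dim=1)
        return self.head(fused)

model = FusionRegressor()
out = model(torch.randn(2, 4, 32, 32), torch.randn(2, 8))   # dummy batch of two neighborhoods
print(out.shape)                                            # torch.Size([2, 1])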
This work illustrates how methods from Earth observation and geoinformatics can make an important contribution to identifying inequalities in people's residential environment and, ultimately, to supporting and monitoring sustainable settlement development with objective information.}, subject = {Geografie}, language = {de} } @phdthesis{Taigel2020, author = {Taigel, Fabian Michael}, title = {Data-driven Operations Management: From Predictive to Prescriptive Analytics}, doi = {10.25972/OPUS-20651}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-206514}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2020}, abstract = {Autonomous cars and artificial intelligence that beats humans in Jeopardy or Go are glamorous examples of the so-called Second Machine Age that involves the automation of cognitive tasks [Brynjolfsson and McAfee, 2014]. However, the larger impact in terms of increasing the efficiency of industry and the productivity of society might come from computers that improve or take over business decisions by using large amounts of available data. This impact may even exceed that of the First Machine Age, the industrial revolution that started with James Watt's invention of an efficient steam engine in the late eighteenth century. Indeed, the prevalent phrase that calls data "the new oil" indicates the growing awareness of data's importance. However, many companies, especially those in the manufacturing and traditional service industries, still struggle to increase productivity using the vast amounts of data [Organisation for Economic Co-operation and Development, 2018]. One reason for this struggle is that companies stick with a traditional way of using data for decision support in operations management that is not well suited to automated decision-making. In traditional inventory and capacity management, some data - typically just historical demand data - is used to estimate a model that makes predictions about uncertain planning parameters, such as customer demand. The planner then has two tasks: to adjust the prediction with respect to additional information that was not part of the data but still might influence demand and to take the remaining uncertainty into account and determine a safety buffer based on the underage and overage costs. In the best case, the planner determines the safety buffer based on an optimization model that takes the costs and the distribution of historical forecast errors into account; however, these decisions are usually based on a planner's experience and intuition, rather than on solid data analysis. This two-step approach is referred to as separated estimation and optimization (SEO). With SEO, using more data and better models for making the predictions would improve only the first step, which would still improve decisions but would not automate (and, hence, revolutionize) decision-making. Using SEO is like using a stronger horse to pull the plow: one still has to walk behind. The real potential for increasing productivity lies in moving from predictive to prescriptive approaches, that is, from the two-step SEO approach, which uses predictive models in the estimation step, to a prescriptive approach, which integrates the optimization problem with the estimation of a model that then provides a direct functional relationship between the data and the decision. Following Akcay et al. [2011], we refer to this integrated approach as joint estimation-optimization (JEO). 
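As a stylized, self-contained illustration of this distinction (not the implementations studied in the thesis), consider a newsvendor with underage cost cu and overage cost co: under SEO a point forecast is corrected by a single safety buffer derived from historical forecast errors, whereas a JEO-style model learns the critical-ratio quantile of demand directly from the features.

# Stylized SEO vs. JEO comparison on synthetic newsvendor data (illustrative assumptions only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))                                    # hypothetical demand features
demand = 100 + 20 * X[:, 0] + rng.normal(0, 10 + 5 * np.abs(X[:, 1]))  # heteroscedastic demand
cu, co = 9.0, 1.0
q = cu / (cu + co)                                                # critical ratio, here 0.9

# SEO: point forecast, then one global safety buffer from the empirical error distribution
forecaster = GradientBoostingRegressor().fit(X, demand)
errors = demand - forecaster.predict(X)
seo_orders = forecaster.predict(X) + np.quantile(errors, q)

# JEO: learn the q-quantile of demand directly as a function of the features
jeo_model = GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, demand)
jeo_orders = jeo_model.predict(X)

def avg_cost(orders, d):
    return np.mean(cu * np.maximum(d - orders, 0) + co * np.maximum(orders - d, 0))

print("SEO cost:", round(avg_cost(seo_orders, demand), 2))
print("JEO cost:", round(avg_cost(jeo_orders, demand), 2))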
JEO approaches prescribe decisions, so they can automate the decision-making process. Just as the steam engine replaced manual work, JEO approaches replace cognitive work. The overarching objective of this dissertation is to analyze, develop, and evaluate new ways for how data can be used in making planning decisions in operations management to unlock the potential for increasing productivity. In doing so, the thesis comprises five self-contained research articles that forge the bridge from predictive to prescriptive approaches. While the first article focuses on how sensitive data like condition data from machinery can be used to make predictions of spare-parts demand, the remaining articles introduce, analyze, and discuss prescriptive approaches to inventory and capacity management. All five articles consider approaches that use machine learning and data in innovative ways to improve current approaches to solving inventory or capacity management problems. The articles show that, by moving from predictive to prescriptive approaches, we can improve data-driven operations management in two ways: by making decisions more accurate and by automating decision-making. Thus, this dissertation provides examples of how digitization and the Second Machine Age can change decision-making in companies to increase efficiency and productivity.}, subject = {Maschinelles Lernen}, language = {en} } @phdthesis{Steininger2023, author = {Steininger, Michael}, title = {Deep Learning for Geospatial Environmental Regression}, doi = {10.25972/OPUS-31312}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-313121}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2023}, abstract = {Environmental issues have emerged especially since humans started burning fossil fuels, which led to air pollution and climate change that harm the environment. These issues' substantial consequences evoked strong efforts towards assessing the state of our environment. Various environmental machine learning (ML) tasks aid these efforts. These tasks concern environmental data but are common ML tasks otherwise, i.e., datasets are split (training, validation, test), hyperparameters are optimized on validation data, and test set metrics measure a model's generalizability. This work focuses on the following environmental ML tasks: Regarding air pollution, land use regression (LUR) estimates air pollutant concentrations at locations where no measurements are available based on measured locations and each location's land use (e.g., industry, streets). For LUR, this work uses data from London (modeled) and Zurich (measured). Concerning climate change, a common ML task is model output statistics (MOS), where a climate model's output for a study area is altered to better fit Earth observations and provide more accurate climate data. This work uses the regional climate model (RCM) REMO and Earth observations from the E-OBS dataset for MOS. Another task regarding climate is grain size distribution interpolation, where soil properties at locations without measurements are estimated based on the few measured locations. This can provide climate models with soil information that is important for hydrology. For this task, data from Lower Franconia is used. Such environmental ML tasks commonly have a number of properties: (i) geospatiality, i.e., their data refers to locations relative to the Earth's surface. (ii) The environmental variables to estimate or predict are usually continuous. 
(iii) Data can be imbalanced due to relatively rare extreme events (e.g., extreme precipitation). (iv) Multiple related potential target variables can be available per location, since measurement devices often contain different sensors. (v) Labels are spatially often only sparsely available since conducting measurements at all locations of interest is usually infeasible. These properties present challenges but also opportunities when designing ML methods for such tasks. In the past, environmental ML tasks have been tackled with conventional ML methods, such as linear regression or random forests (RFs). However, the field of ML has made tremendous leaps beyond these classic models through deep learning (DL). In DL, models use multiple layers of neurons, producing increasingly higher-level feature representations with growing layer depth. DL has made previously infeasible ML tasks feasible, improved the performance for many tasks in comparison to existing ML models significantly, and eliminated the need for manual feature engineering in some domains due to its ability to learn features from raw data. To harness these advantages for environmental domains it is promising to develop novel DL methods for environmental ML tasks. This thesis presents methods for dealing with special challenges and exploiting opportunities inherent to environmental ML tasks in conjunction with DL. To this end, the proposed methods explore the following techniques: (i) Convolutions as in convolutional neural networks (CNNs) to exploit reoccurring spatial patterns in geospatial data. (ii) Posing the problems as regression tasks to estimate the continuous variables. (iii) Density-based weighting to improve estimation performance for rare and extreme events. (iv) Multi-task learning to make use of multiple related target variables. (v) Semi-supervised learning to cope with label sparsity. Using these techniques, this thesis considers four research questions: (i) Can air pollution be estimated without manual feature engineering? This is answered positively by the introduction of the CNN-based LUR model MapLUR as well as the off-the-shelf LUR solution OpenLUR. (ii) Can colocated pollution data improve spatial air pollution models? Multi-task learning for LUR is developed for this, showing potential for improvements with colocated data. (iii) Can DL models improve the quality of climate model outputs? The proposed DL climate MOS architecture ConvMOS demonstrates this. Additionally, semi-supervised training of multilayer perceptrons (MLPs) for grain size distribution interpolation is presented, which can provide improved input data. (iv) Can DL models be taught to better estimate climate extremes? To this end, density-based weighting for imbalanced regression (DenseLoss) is proposed and applied to the DL architecture ConvMOS, improving climate extremes estimation. These methods show how especially DL techniques can be developed for environmental ML tasks with their special characteristics in mind. 
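Loosely inspired by the density-based weighting idea mentioned above - the thesis's DenseLoss differs in its details - one simple way to emphasize rare, extreme target values is to weight each training sample by the inverse of an estimated target density, as in this sketch on synthetic data:

# Rough sketch of density-based sample weighting for imbalanced regression (synthetic data).
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 5))
y = X[:, 0] + 0.3 * rng.normal(size=3000) ** 3          # heavy-ish tails: rare extreme targets

density = gaussian_kde(y)(y)                            # estimated density of each target value
weights = 1.0 / density
weights /= weights.mean()                               # normalize so the average weight is 1

plain = GradientBoostingRegressor().fit(X, y)
weighted = GradientBoostingRegressor().fit(X, y, sample_weight=weights)

extreme = np.abs(y) > np.quantile(np.abs(y), 0.95)      # focus the check on the extremes
print("MAE on extremes, unweighted:", np.mean(np.abs(plain.predict(X[extreme]) - y[extreme])))
print("MAE on extremes, weighted:  ", np.mean(np.abs(weighted.predict(X[extreme]) - y[extreme])))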
This allows for better models than previously possible with conventional ML, leading to more accurate assessment and better understanding of the state of our environment.}, subject = {Deep learning}, language = {en} } @phdthesis{Stein2019, author = {Stein, Nikolai Werner}, title = {Advanced Analytics in Operations Management and Information Systems: Methods and Applications}, doi = {10.25972/OPUS-19266}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-192668}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2019}, abstract = {The digital transformation of society holds enormous potential for companies from all sectors. Thanks to new data sources, growing computing power, and improved connectivity, these companies have access to rapidly increasing amounts of data. To succeed in the digital transformation and to realize competitive advantages in terms of efficiency and effectiveness, companies must use the available data and establish data-driven decision processes. Nevertheless, the majority of firms only use tools from the field of "descriptive analytics", and only a small share of companies already take advantage of the possibilities of "predictive analytics" and "prescriptive analytics". The goal of this dissertation, which consists of four self-contained parts, is to identify opportunities for applying "prescriptive analytics". Since predictive models are an essential prerequisite for "prescriptive analytics", the first two parts of this work address methods from the field of "predictive analytics". Starting from machine learning techniques, the development of a predictive model is first illustrated using the example of capacity and personnel planning at an IT consulting company. Subsequently, a toolbox for data science applications is developed, which provides decision-makers with guidelines and best practices for modeling, feature engineering, and model interpretation. The use of the toolbox is illustrated with data from a large German industrial company. Improved forecasts provided by powerful prediction models allow decision-makers to make better decisions in some situations and thereby generate added value. In many complex decision situations, however, deriving better policies from the available forecasts is far from trivial and requires the development of new planning algorithms. For this reason, the last two parts of this work focus on methods from the field of "prescriptive analytics". First, it is analyzed how the predictions of predictive models can be translated into prescriptive policies for solving an "Optimal Searcher Path Problem". Despite impressive advances in artificial intelligence research, the predictions of predictive models are still subject to a certain degree of uncertainty. The last part of this work proposes a prescriptive approach that accounts for this uncertainty. In particular, a data-driven method for field service scheduling is developed. 
This approach integrates predictions of success probabilities and the model quality of the corresponding prediction model into a "Team Orienteering Problem".}, subject = {Operations Management}, language = {en} } @phdthesis{Pfitzner2019, author = {Pfitzner, Christian}, title = {Visual Human Body Weight Estimation with Focus on Clinical Applications}, isbn = {978-3-945459-27-0 (online)}, doi = {10.25972/OPUS-17484}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-174842}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2019}, abstract = {It is the aim of this thesis to present a visual body weight estimation approach that is suitable for medical applications. A typical scenario where the estimation of the body weight is essential is the emergency treatment of stroke patients: in the case of an ischemic stroke, the patient has to receive a body-weight-adapted drug to dissolve a blood clot in a vessel. The accuracy of the estimated weight influences the outcome of the therapy directly. However, the treatment has to start as early as possible after the arrival at the trauma room to be effective. Weighing a patient takes time, and the patient has to be moved. Furthermore, patients are often not able to communicate a value for their body weight due to their stroke symptoms. Therefore, it is state of the art that physicians guess the body weight. A patient receiving too low a dose has an increased risk that the blood clot does not dissolve and brain tissue is permanently damaged. Today, about one-third of patients receive an insufficient dose. In contrast, an overdose can cause bleeding and further complications. Physicians are aware of this issue, but a reliable alternative is missing. The thesis presents state-of-the-art principles and devices for the measurement and estimation of body weight in the context of medical applications. While scales are common and available at a hospital, the process of weighing takes too long and can hardly be integrated into the process of stroke treatment. Sensor systems and algorithms are presented in the section on related work and provide an overview of different approaches. The system presented here -- called Libra3D -- consists of a computer installed in a real trauma room, as well as visual sensors integrated into the ceiling. For the estimation of the body weight, the patient lies on a stretcher which is placed in the field of view of the sensors. The three sensors -- two RGB-D and a thermal camera -- are calibrated intrinsically and extrinsically. Also, algorithms for sensor fusion are presented to align the data from all sensors, which is the basis for a reliable segmentation of the patient. A combination of state-of-the-art image and point cloud algorithms is used to localize the patient on the stretcher. The challenge in the scenario with the patient on the bed is the dynamic environment, including other people or medical devices in the field of view. After the successful segmentation, a set of hand-crafted features is extracted from the patient's point cloud. These features rely on geometric and statistical values and provide a robust input to a subsequent machine learning approach. The final estimation is done with a previously trained artificial neural network. The experiments section evaluates different configurations of the previously extracted feature vector. 
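The concrete Libra3D features and network are not spelled out here; the following is only a synthetic sketch of the general pattern described above, i.e., simple geometric and statistical features computed from a segmented point cloud and fed into a small neural network regressor (all values are made up):

# Illustrative sketch, not the Libra3D implementation: point-cloud features -> MLP weight regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor

def cloud_features(points):
    """points: (N, 3) array of x, y, z coordinates of the segmented patient."""
    extents = points.max(axis=0) - points.min(axis=0)         # bounding-box dimensions
    return np.concatenate([
        extents,
        points.std(axis=0),                                   # spread along each axis
        [points[:, 2].mean(), len(points) * 1e-4],            # mean height, point-count proxy
    ])

rng = np.random.default_rng(3)
clouds = [rng.normal(size=(5000, 3)) * rng.uniform(0.3, 1.0, 3) for _ in range(200)]
weights = np.array([60 + 40 * c.std() for c in clouds])       # synthetic ground-truth weights

X = np.vstack([cloud_features(c) for c in clouds])
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, weights)
print("predicted weight [kg]:", round(model.predict(X[:1])[0], 1))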
Additionally, the approach presented here is compared to state-of-the-art methods: the patient's own assessment, the physician's guess, and an anthropometric estimation. Apart from the patient's own estimate, Libra3D outperforms all state-of-the-art estimation methods: 95 percent of all patients are estimated with a relative error of less than 10 percent with respect to the ground-truth body weight. The measurement takes only a minimal amount of time, and the approach can easily be integrated into the treatment of stroke patients without hindering physicians. Furthermore, the experiments section demonstrates two additional applications: the extracted features can also be used to estimate the body weight of people standing or even walking in front of a 3D camera. Also, it is possible to determine or classify the BMI of a subject on a stretcher. A potential application for this approach is the reduction of the radiation dose of patients being exposed to X-rays during a CT examination. During the course of this thesis, several data sets were recorded. These data sets contain the ground-truth body weight, as well as the data from the sensors. They are available for collaboration in the field of body weight estimation for medical applications.}, subject = {Punktwolke}, language = {en} } @phdthesis{Oberdorf2022, author = {Oberdorf, Felix}, title = {Design and Evaluation of Data-Driven Enterprise Process Monitoring Systems}, doi = {10.25972/OPUS-29853}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-298531}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2022}, abstract = {Increasing global competition forces organizations to improve their processes to gain a competitive advantage. In the manufacturing sector, this is facilitated through tremendous digital transformation. Fundamental components in such digitalized environments are process-aware information systems that record the execution of business processes, assist in process automation, and unlock the potential to analyze processes. However, most enterprise information systems focus on informational aspects, process automation, or data collection but do not tap into predictive or prescriptive analytics to foster data-driven decision-making. Therefore, this dissertation sets out to investigate the design of analytics-enabled information systems in five independent parts, which step-wise introduce analytics capabilities and assess potential opportunities for process improvement in real-world scenarios. To set up and extend analytics-enabled information systems, an essential prerequisite is identifying success factors, which we identify in the context of process mining as a descriptive analytics technique. We combine an established process mining framework and a success model to provide a structured approach for assessing success factors and identifying challenges, motivations, and perceived business value of process mining from employees across organizations as well as process mining experts and consultants. We extend the existing success model and provide lessons for business value generation through process mining based on the derived findings. To assist the realization of process-mining-enabled business value, we design an artifact for context-aware process mining. The artifact combines standard process logs with additional context information to assist the automated identification of process realization paths associated with specific context events. 
Yet, realizing business value is a challenging task, as transforming processes based on informational insights is time-consuming. To overcome this, we showcase the development of a predictive process monitoring system for disruption handling in a production environment. The system leverages state-of-the-art machine learning algorithms for disruption type classification and duration prediction. It combines the algorithms with additional organizational data sources and a simple assignment procedure to assist the disruption handling process. The design of such a system and its analytics models is a challenging task, which we address by engineering a five-phase method for predictive end-to-end enterprise process network monitoring leveraging multi-headed deep neural networks. The method facilitates the integration of heterogeneous data sources through dedicated neural network input heads, which are concatenated for a prediction. An evaluation based on a real-world use case highlights the superior performance of the resulting multi-headed network. Yet even the improved model does not deliver perfect predictions, and thus decisions about assigning agents to solve disruptions have to be made under uncertainty. Mathematical models can assist here, but due to complex real-world conditions, the number of potential scenarios increases massively and limits the solution of assignment models. To overcome this and tap into the potential of prescriptive process monitoring systems, we develop a data-driven approximate dynamic stochastic programming approach, which incorporates multiple uncertainties for an assignment decision. The resulting model yields a significant performance improvement and ultimately highlights the particular importance of analytics-enabled information systems for organizational process improvement.}, subject = {Operations Management}, language = {en} } @phdthesis{Notz2021, author = {Notz, Pascal Markus}, title = {Prescriptive Analytics for Data-driven Capacity Management}, doi = {10.25972/OPUS-24042}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-240423}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2021}, abstract = {Digitization and artificial intelligence are radically changing virtually all areas across business and society. These developments are mainly driven by the technology of machine learning (ML), which is enabled by the coming together of large amounts of training data, statistical learning theory, and sufficient computational power. This technology forms the basis for the development of new approaches to solve classical planning problems of Operations Research (OR): prescriptive analytics approaches integrate ML prediction and OR optimization into a single prescription step, so they learn from historical observations of demand and a set of features (co-variates) and provide a model that directly prescribes future decisions. These novel approaches provide enormous potential to improve planning decisions, as first case reports showed, and, consequently, constitute a new field of research in Operations Management (OM). First works in this new field of research have studied approaches to solving comparatively simple planning problems in the area of inventory management. However, common OM planning problems often have a more complex structure, and many of these complex planning problems are within the domain of capacity planning. Therefore, this dissertation focuses on developing new prescriptive analytics approaches for complex capacity management problems. 
This dissertation consists of three independent articles that develop new prescriptive approaches and use these to solve realistic capacity planning problems. The first article, "Prescriptive Analytics for Flexible Capacity Management", develops two prescriptive analytics approaches, weighted sample average approximation (wSAA) and kernelized empirical risk minimization (kERM), to solve a complex two-stage capacity planning problem that has been studied extensively in the literature: a logistics service provider sorts daily incoming mail items on three service lines that must be staffed on a weekly basis. This article is the first to develop a kERM approach to solve a complex two-stage stochastic capacity planning problem with matrix-valued observations of demand and vector-valued decisions. The article develops out-of-sample performance guarantees for kERM and various kernels, and shows the universal approximation property when using a universal kernel. The results of the numerical study suggest that prescriptive analytics approaches may lead to significant improvements in performance compared to traditional two-step approaches or SAA and that their performance is more robust to variations in the exogenous cost parameters. The second article, "Prescriptive Analytics for a Multi-Shift Staffing Problem", uses prescriptive analytics approaches to solve the (queuing-type) multi-shift staffing problem (MSSP) of an aviation maintenance provider that receives customer requests of uncertain number and at uncertain arrival times throughout each day and plans staff capacity for two shifts. This planning problem is particularly complex because the order inflow and processing are modelled as a queuing system, and the demand in each day is non-stationary. The article addresses this complexity by deriving an approximation of the MSSP that enables the planning problem to be solved using wSAA, kERM, and a novel Optimization Prediction approach. A numerical evaluation shows that wSAA leads to the best performance in this particular case. The solution method developed in this article builds a foundation for solving queuing-type planning problems using prescriptive analytics approaches, so it bridges the "worlds" of queuing theory and prescriptive analytics. The third article, "Explainable Subgradient Tree Boosting for Prescriptive Analytics in Operations Management" proposes a novel prescriptive analytics approach to solve the two capacity planning problems studied in the first and second articles that allows decision-makers to derive explanations for prescribed decisions: Subgradient Tree Boosting (STB). STB combines the machine learning method Gradient Boosting with SAA and relies on subgradients because the cost function of OR planning problems often cannot be differentiated. A comprehensive numerical analysis suggests that STB can lead to a prescription performance that is comparable to that of wSAA and kERM. The explainability of STB prescriptions is demonstrated by breaking exemplary decisions down into the impacts of individual features. The novel STB approach is an attractive choice not only because of its prescription performance, but also because of the explainability that helps decision-makers understand the causality behind the prescriptions. 
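As a hedged, stand-alone illustration of the weighted SAA idea described above - using simple k-nearest-neighbor weights and a plain newsvendor decision rather than the articles' more elaborate weight functions and capacity planning problems - consider:

# Sketch of a wSAA-style prescription with kNN weights on synthetic data (illustrative only).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
X_hist = rng.normal(size=(1000, 4))                           # historical feature observations
d_hist = 50 + 15 * X_hist[:, 0] + rng.normal(0, 5, 1000)      # historical demands
cu, co = 4.0, 1.0                                             # underage / overage costs

def wsaa_order(x_new, k=50):
    nn = NearestNeighbors(n_neighbors=k).fit(X_hist)
    idx = nn.kneighbors(x_new.reshape(1, -1), return_distance=False)[0]
    demands = np.sort(d_hist[idx])                            # equal weights 1/k on the k neighbors
    # the weighted SAA of the newsvendor objective reduces to the critical-ratio quantile
    return demands[int(np.ceil(cu / (cu + co) * k)) - 1]

print("prescribed quantity:", round(wsaa_order(np.array([0.5, 0.0, 0.0, 0.0])), 1))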
The results presented in these three articles demonstrate that using prescriptive analytics approaches, such as wSAA, kERM, and STB, to solve complex planning problems can lead to significantly better decisions compared to traditional approaches that neglect feature data or rely on a parametric distribution estimation.}, subject = {Maschinelles Lernen}, language = {en} } @phdthesis{Niebler2019, author = {Niebler, Thomas}, title = {Extracting and Learning Semantics from Social Web Data}, doi = {10.25972/OPUS-17866}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-178666}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2019}, abstract = {Making machines understand natural language is a dream of mankind that has existed for a very long time. Early attempts at programming machines to converse with humans in a supposedly intelligent way relied on phrase lists and simple keyword matching. However, such approaches cannot provide semantically adequate answers, as they do not consider the specific meaning of the conversation. Thus, if we want to enable machines to actually understand language, we need to be able to access semantically relevant background knowledge. For this, it is possible to query so-called ontologies, which are large networks containing knowledge about real-world entities and their semantic relations. However, creating such ontologies is a tedious task, as extensive expert knowledge is often required. Thus, we need to find ways to automatically construct and update ontologies that fit human intuition of semantics and semantic relations. More specifically, we need to determine semantic entities and find relations between them. While this is usually done on large corpora of unstructured text, previous work has shown that we can at least facilitate the first issue of extracting entities by considering special data such as tagging data or human navigational paths. Here, we do not need to detect the actual semantic entities, as they are already provided because of the way those data are collected. Thus we can mainly focus on the problem of assessing the degree of semantic relatedness between tags or web pages. However, there exist several issues which need to be overcome if we want to approximate human intuition of semantic relatedness. For this, it is necessary to represent words and concepts in a way that allows easy and highly precise semantic characterization. This also largely depends on the quality of the data from which these representations are constructed. In this thesis, we extract semantic information from both tagging data created by users of social tagging systems and human navigation data in different semantic-driven social web systems. Our main goal is to construct high-quality and robust vector representations of words which can then be used to measure the relatedness of semantic concepts. First, we show that navigation in the social media systems Wikipedia and BibSonomy is driven by a semantic component. After this, we discuss and extend methods to model the semantic information in tagging data as low-dimensional vectors. Furthermore, we show that tagging pragmatics influences different facets of tagging semantics. We then investigate the usefulness of human navigational paths in several different settings on Wikipedia and BibSonomy for measuring semantic relatedness. Finally, we propose a metric-learning based algorithm to adapt pre-trained word embeddings to datasets containing human judgment of semantic relatedness. 
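As a small, self-contained sketch of the kind of evaluation implied above - measuring relatedness as the cosine similarity of word vectors and comparing it against human judgments - consider the following; the vectors and ratings are made-up placeholders, not the tagging-based embeddings or benchmark datasets used in the thesis:

# Toy evaluation of embedding-based relatedness against (hypothetical) human ratings.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
vocab = ["car", "automobile", "fruit", "banana"]
emb = {w: rng.normal(size=50) for w in vocab}        # stand-in for learned word vectors

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

pairs = [("car", "automobile"), ("fruit", "banana"), ("car", "banana")]
human = [9.5, 8.7, 1.2]                              # hypothetical human relatedness ratings
model = [cosine(emb[a], emb[b]) for a, b in pairs]

print("Spearman correlation with human judgments:", spearmanr(human, model)[0])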
This work contributes to the field of studying semantic relatedness between words by proposing methods to extract semantic relatedness from web navigation, to learn high-quality and low-dimensional word representations from tagging data, and to learn semantic relatedness from any kind of vector representation by exploiting human feedback. Applications first and foremost lie in ontology learning for the Semantic Web, but also in semantic search or query expansion.}, subject = {Semantik}, language = {en} } @unpublished{Nassourou2011, author = {Nassourou, Mohamadou}, title = {A Knowledge-based Hybrid Statistical Classifier for Reconstructing the Chronology of the Quran}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-54712}, year = {2011}, abstract = {Computationally categorizing the Quran's chapters has mainly been confined to determining the chapters' places of revelation. However, this broad classification is not sufficient to effectively and thoroughly understand and interpret the Quran. The chronology of revelation would not only improve comprehension of the philosophy of Islam, but also make it easier to implement and memorize its laws and recommendations. This paper attempts to estimate the chapters' possible dates of revelation through their lexical frequency profiles. A hybrid statistical classifier consisting of stemming and clustering algorithms for comparing lexical frequency profiles of chapters and deriving dates of revelation has been developed. The classifier is trained using some chapters with known dates of revelation. Then it classifies chapters with uncertain dates of revelation by computing their proximity to the training ones. The results reported here indicate that the proposed methodology yields usable results in estimating the dates of revelation of the Quran's chapters based on their lexical contents.}, subject = {Text Mining}, language = {en} } @unpublished{Nassourou2011, author = {Nassourou, Mohamadou}, title = {Using Machine Learning Algorithms for Categorizing Quranic Chapters by Major Phases of Prophet Mohammad's Messengership}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-66862}, year = {2011}, abstract = {This paper discusses the categorization of Quranic chapters by major phases of Prophet Mohammad's messengership using machine learning algorithms. First, the chapters were categorized by places of revelation using Support Vector Machine and na{\"i}ve Bayesian classifiers separately, and their results were compared to each other, as well as to the existing traditional Islamic and western orientalists' classifications. The chapters were categorized into Meccan (revealed in Mecca) and Medinan (revealed in Medina). After that, chapters of each category were clustered using a kind of fuzzy single-linkage clustering approach, in order to correspond to the major phases of Prophet Mohammad's life. The major phases of the Prophet's life were manually derived from the Quranic text, as well as from the secondary Islamic literature, e.g. hadiths and exegesis. Previous studies on computing the places of revelation of Quranic chapters relied heavily on features extracted from existing background knowledge of the chapters. For instance, it is known that Meccan chapters contain mostly verses about faith and related problems, while Medinan ones encompass verses dealing with social issues, battles, etc. These features are by themselves insufficient as a basis for assigning the chapters to their respective places of revelation. 
In fact, there are exceptions, since some chapters contain both Meccan and Medinan features. In this study, features of each category were automatically created from very few chapters whose places of revelation have been determined through the identification of historical facts and events such as battles, the migration to Medina, etc. Chapters with unanimously agreed places of revelation were used as the initial training set, while the remaining chapters formed the testing set. The classification process was made recursive by regularly augmenting the training set with correctly classified chapters, in order to classify the whole testing set. Each chapter was preprocessed by removing unimportant words, stemming, and representation with the vector space model. The results of this study show that the two classifiers produced usable results, with the support vector machine classifier performing better. This study indicates that the proposed methodology yields encouraging results for arranging Quranic chapters by phases of Prophet Mohammad's messengership.}, subject = {Koran}, language = {en} } @phdthesis{Nadernezhad2024, author = {Nadernezhad, Ali}, title = {Engineering approaches in biofabrication of vascularized structures}, doi = {10.25972/OPUS-34589}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-345892}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2024}, abstract = {Biofabrication technologies must address numerous parameters and conditions to reconstruct tissue complexity in vitro. A critical challenge is vascularization, especially for large constructs exceeding diffusion limits. This requires the creation of artificial vascular structures, a task demanding the convergence and integration of multiple engineering approaches. This doctoral dissertation aims to achieve two primary objectives: firstly, to implement and refine engineering methods for creating artificial microvascular structures using Melt Electrowriting (MEW)-assisted sacrificial templating, and secondly, to deepen the understanding of the critical factors influencing the printability of bioink formulations in 3D extrusion bioprinting. In the first part of this dissertation, two innovative sacrificial templating techniques using MEW are explored. Utilizing a carbohydrate glass as a fugitive material marked a pioneering advancement in the processing of sugars with MEW at a resolution under 100 microns. Furthermore, the introduction of the "print-and-fuse" strategy as a groundbreaking method enabled the fabrication of biomimetic branching microchannels embedded in hydrogel matrices, which can then be endothelialized to mirror in vivo vascular conditions. The second part of the dissertation explores extrusion bioprinting. By introducing a simple binary bioink formulation, the correlation between physical properties and printability was showcased. In the next step, employing state-of-the-art machine-learning approaches revealed a deeper understanding of the correlations between bioink properties and printability in an extended library of hydrogel formulations. This dissertation offers in-depth insights into two key biofabrication technologies. 
Future work could merge these into hybrid methods for the fabrication of vascularized constructs, combining MEW's precision with fine-tuned bioink properties in automated extrusion bioprinting.}, subject = {3D-Druck}, language = {en} } @phdthesis{Meller2020, author = {Meller, Jan Maximilian}, title = {Data-driven Operations Management: Combining Machine Learning and Optimization for Improved Decision-making}, doi = {10.25972/OPUS-20604}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-206049}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2020}, abstract = {This dissertation consists of three independent, self-contained research papers that investigate how state-of-the-art machine learning algorithms can be used in combination with operations management models to consider high dimensional data for improved planning decisions. More specifically, the thesis focuses on the question concerning how the underlying decision support models change structurally and how those changes affect the resulting decision quality. Over the past years, the volume of globally stored data has experienced tremendous growth. Rising market penetration of sensor-equipped production machinery, advanced ways to track user behavior, and the ongoing use of social media lead to large amounts of data on production processes, user behavior, and interactions, as well as condition information about technical gear, all of which can provide valuable information to companies in planning their operations. In the past, two generic concepts have emerged to accomplish this. The first concept, separated estimation and optimization (SEO), uses data to forecast the central inputs (i.e., the demand) of a decision support model. The forecast and a distribution of forecast errors are then used in a subsequent stochastic optimization model to determine optimal decisions. In contrast to this sequential approach, the second generic concept, joint estimation-optimization (JEO), combines the forecasting and optimization step into a single optimization problem. Following this approach, powerful machine learning techniques are employed to approximate highly complex functional relationships and hence relate feature data directly to optimal decisions. The first article, "Machine learning for inventory management: Analyzing two concepts to get from data to decisions", chapter 2, examines performance differences between implementations of these concepts in a single-period Newsvendor setting. The paper first proposes a novel JEO implementation based on the random forest algorithm to learn optimal decision rules directly from a data set that contains historical sales and auxiliary data. Going forward, we analyze structural properties that lead to these performance differences. Our results show that the JEO implementation achieves significant cost improvements over the SEO approach. These differences are strongly driven by the decision problem's cost structure and the amount and structure of the remaining forecast uncertainty. The second article, "Prescriptive call center staffing", chapter 3, applies the logic of integrating data analysis and optimization to a more complex problem class, an employee staffing problem in a call center. We introduce a novel approach to applying the JEO concept that augments historical call volume data with features like the day of the week, the beginning of the month, and national holiday periods. 
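Purely as an illustration of this kind of feature augmentation (not code from the thesis), a dated call-volume series could be enriched with calendar features as follows; the column names, the holiday list, and the use of pandas are assumptions made only for this sketch.

# Hedged sketch: augment a daily call-volume series with calendar features.
import pandas as pd

def augment_call_volume(calls: pd.DataFrame, holidays: set) -> pd.DataFrame:
    """calls: DataFrame with a DatetimeIndex and a 'volume' column (hypothetical)."""
    feats = pd.DataFrame(index=calls.index)
    feats["volume_lag_7d"] = calls["volume"].shift(7)            # last week's volume
    feats["day_of_week"] = calls.index.dayofweek                 # 0 = Monday
    feats["start_of_month"] = (calls.index.day <= 3).astype(int)
    feats["national_holiday"] = calls.index.normalize().isin(holidays).astype(int)
    return feats.dropna()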
We employ a regression tree to learn the ex-post optimal staffing levels based on similarity structures in the data and then generalize these insights to determine future staffing levels. This approach, relying on only a few modeling assumptions, significantly outperforms a state-of-the-art benchmark that uses considerably more model structure and assumptions. The third article, "Data-driven sales force scheduling", chapter 4, is motivated by the problem of how a company should allocate limited sales resources. We propose a novel approach based on the SEO concept that involves a machine learning model to predict the probability of winning a specific project. We develop a methodology that uses this prediction model to estimate the "uplift", that is, the incremental value of an additional visit to a particular customer location. To account for the remaining uncertainty at the subsequent optimization stage, we adapt the decision support model in such a way that it can control for the level of trust in the predicted uplifts. This novel policy dominates both a benchmark that relies completely on the uplift information and a robust benchmark that optimizes the sum of potential profits while neglecting any uplift information. The results of this thesis show that decision support models in operations management can be transformed fundamentally by considering additional data, benefiting from better decision quality and, correspondingly, lower mismatch costs. How machine learning algorithms can be integrated into these decision support models depends on the complexity and the context of the underlying decision problem. In summary, this dissertation provides an analysis based on three different, specific application scenarios that serve as a foundation for further analyses of employing machine learning for decision support in operations management.}, subject = {Operations Management}, language = {en} } @phdthesis{Marquardt2023, author = {Marquardt, Andr{\´e}}, title = {Machine-Learning-Based Identification of Tumor Entities, Tumor Subgroups, and Therapy Options}, doi = {10.25972/OPUS-32954}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-329548}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2023}, abstract = {Molecular genetic analyses, such as mutation analyses, are becoming increasingly important in the tumor field, especially in the context of therapy stratification. The identification of the underlying tumor entity is crucial, but can sometimes be difficult, for example in the case of metastases or the so-called Cancer of Unknown Primary (CUP) syndrome. In recent years, methylome- and transcriptome-based machine learning (ML) approaches have been developed to enable fast and reliable tumor and tumor subtype identification. However, so far only methylome analyses have become widely used in routine diagnostics. The present work addresses the utility of publicly available RNA-sequencing data to determine the underlying tumor entity, possible subgroups, and potential therapy options. Identification of these by ML - in particular random forest (RF) models - was the first task. The results with test accuracies of up to 99\% provided new, previously unknown insights into the trained models and the corresponding entity prediction. Reducing the input data to the top 100 mRNA transcripts resulted in a minimal loss of prediction quality and could potentially enable application in clinical or real-world settings.
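To make the reduction to 100 transcripts concrete, a minimal sketch is given below under the assumption of a samples-by-genes expression matrix and scikit-learn; it illustrates the general idea of importance-based reduction, not the actual pipeline of the thesis.

# Hedged sketch: train a random forest on expression data and keep the
# 100 transcripts with the highest impurity-based feature importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def reduce_to_top_transcripts(X, y, gene_names, k=100):
    """X: samples x genes array, y: tumor entity labels (both hypothetical)."""
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
    top = np.argsort(rf.feature_importances_)[::-1][:k]      # indices, best first
    return [gene_names[i] for i in top], X[:, top]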
By introducing the ratios of these top 100 genes to each other as a new database for RF models, a novel method was developed enabling the use of trained RF models on data from other sources. Further analysis of the transcriptomic differences of metastatic samples by visual clustering showed that there were no differences specific to the site of metastasis. Similarly, no distinct clusters were detectable when investigating primary tumors and metastases of skin cutaneous melanoma (SKCM). Subsequently, more than half of the validation datasets had a prediction accuracy of at least 80\%, with many datasets even achieving a prediction accuracy of - or close to - 100\%. To investigate the applicability of the used methods for subgroup identification, the TCGA-KIPAN dataset, consisting of the three major kidney cancer subgroups, was used. The results revealed a new, previously unknown subgroup consisting of all histopathological groups with clinically relevant characteristics, such as significantly different survival. Based on significant differences in gene expression, potential therapeutic options of the identified subgroup could be proposed. In conclusion, in exploring the potential applicability of RNA-sequencing data as a basis for therapy prediction, it was shown that this type of data is suitable for predicting entities as well as subgroups with high accuracy. Clinical relevance was also demonstrated for a novel subgroup in renal cell carcinoma. The reduction of the number of genes required for entity prediction to 100 enables panel sequencing and thus demonstrates potential applicability in a real-life setting.}, subject = {Maschinelles Lernen}, language = {en} } @phdthesis{Lenard2023, author = {Lenard, Chris}, title = {Ans{\"a}tze zur informatik-gest{\"u}tzten Vorherbestimmung der Behandlungszeit anhand von Befundungsdaten bei Kontroll- und Schmerzf{\"a}llen in der Zahnarztpraxis}, doi = {10.25972/OPUS-32034}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-320348}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2023}, abstract = {Diese retrospektive Studie untersuchte Patientenakten des elektronischen Karteikartensystems einer privaten Zahnarztpraxis von Patienten, welche zur Kontrolluntersuchung oder wegen Schmerzen vorstellig waren. Ziel der Studie war das Entwickeln von Methoden zur Vorhersage der Behandlungszeit f{\"u}r zuk{\"u}nftige Termine anhand verschiedener Patienteninformationen. Mittels statistischer deskriptiver Auswertung wurden die erfassten Daten untersucht und Korrelationen im Hinblick auf die Behandlungsdauer zwischen den verschiedenen Attributen hergestellt. Es wurden verschiedene Methoden zur Vorherbestimmung der Behandlungsdauer aufgestellt und auf ihr Optimierungspotential getestet. Die Methode mit dem h{\"o}chsten Optimierungswert war ein Ansatz maschinellen Lernens. Der entworfene Algorithmus berechnete Behandlungszeiten der Testgruppe anhand eines neuronalen Netzes, welches durch Trainieren mit den Daten der Untersuchungsgruppe erstellt wurde.}, subject = {Maschinelles Lernen}, language = {de} } @phdthesis{Krenzer2023, author = {Krenzer, Adrian}, title = {Machine learning to support physicians in endoscopic examinations with a focus on automatic polyp detection in images and videos}, doi = {10.25972/OPUS-31911}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-319119}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2023}, abstract = {Deep learning enables enormous progress in many computer vision-related tasks.
Artificial Intelligence (AI) steadily yields new state-of-the-art results in the field of detection and classification. Thereby, AI performance equals or exceeds human performance. Those achievements impacted many domains, including medical applications. One particular field of medical applications is gastroenterology. In gastroenterology, machine learning algorithms are used to assist examiners during interventions. One of the most critical concerns for gastroenterologists is the development of Colorectal Cancer (CRC), which is one of the leading causes of cancer-related deaths worldwide. Detecting polyps in screening colonoscopies is the essential procedure to prevent CRC. Thereby, the gastroenterologist uses an endoscope to screen the whole colon to find polyps during a colonoscopy. Polyps are mucosal growths that can vary in severity. This thesis supports gastroenterologists in their examinations with automated detection and classification systems for polyps. The main contribution is a real-time polyp detection system. This system is ready to be installed in any gastroenterology practice worldwide using open-source software. The system achieves state-of-the-art detection results and is currently evaluated in a clinical trial in four different centers in Germany. The thesis presents two additional key contributions: One is a polyp detection system with extended vision tested in an animal trial. Polyps often hide behind folds or in uninvestigated areas. Therefore, the polyp detection system with extended vision uses an endoscope assisted by two additional cameras to see behind those folds. If a polyp is detected, the endoscopist receives a visual signal. While the detection system handles the additional two camera inputs, the endoscopist focuses on the main camera as usual. The second comprises two polyp classification models, one for the classification based on shape (Paris) and the other on surface and texture (NBI International Colorectal Endoscopic (NICE) classification). Both classifications help the endoscopist with the treatment of and the decisions about the detected polyp. The key algorithms of the thesis achieve state-of-the-art performance. Outstandingly, the polyp detection system tested on a highly demanding video data set shows an F1 score of 90.25 \% while working in real-time. The results exceed all real-time systems in the literature. Furthermore, the first preliminary results of the clinical trial of the polyp detection system suggest a high Adenoma Detection Rate (ADR). In the preliminary study, all polyps were detected by the polyp detection system, and the system achieved a high usability score of 96.3 (max 100). The Paris classification model achieved an F1 score of 89.35 \%, which is state-of-the-art. The NICE classification model achieved an F1 score of 81.13 \%. Furthermore, a large data set for polyp detection and classification was created during this thesis. Therefore, a fast and robust annotation system called Fast Colonoscopy Annotation Tool (FastCAT) was developed. The system simplifies the annotation process for gastroenterologists. Thereby, the gastroenterologists only annotate key parts of the endoscopic video. Afterward, those video parts are pre-labeled by a polyp detection AI to speed up the process. After the AI has pre-labeled the frames, non-experts correct and finish the annotation. This annotation process is fast and ensures high quality.
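Sketched below, purely for illustration, is what such a pre-labeling step could look like in principle; the detector function, the confidence threshold, and the JSON layout are hypothetical and are not taken from FastCAT itself.

# Hedged sketch: store detector proposals as pre-labels for human correction.
import json

def prelabel_frames(frames, detect_polyps, confidence=0.5, out_path="prelabels.json"):
    """detect_polyps(frame) -> list of {'box': [...], 'score': float} (hypothetical)."""
    records = []
    for idx, frame in enumerate(frames):
        boxes = [b for b in detect_polyps(frame) if b["score"] >= confidence]
        records.append({"frame": idx, "boxes": boxes, "reviewed": False})
    with open(out_path, "w") as fh:
        json.dump(records, fh, indent=2)   # non-experts later correct these entries
    return records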
FastCAT reduces the overall workload of the gastroenterologist on average by a factor of 20 compared to an open-source state-of-the-art annotation tool.}, subject = {Deep Learning}, language = {en} } @phdthesis{Kreikenbohm2019, author = {Kreikenbohm, Annika Franziska Eleonore}, title = {Classifying the high-energy sky with spectral timing methods}, doi = {10.25972/OPUS-19205}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-192054}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2019}, abstract = {Active galactic nuclei (AGN) are among the brightest and most frequent sources on the extragalactic X-ray and gamma-ray sky. Their central supermassive black hole generates an enormous luminosity through accretion of the surrounding gas. A few AGN harbor highly collimated, powerful jets which are observed across the entire electromagnetic spectrum. If their jet axis is seen at a small angle to our line of sight (these objects are then called blazars), the jet emission can outshine any other emission component from the system. Synchrotron emission from electrons and positrons clearly proves the existence of a relativistic leptonic component in the jet plasma. To this day, however, it is still an open question whether heavier particles, especially protons, are accelerated as well. If this is the case, AGN would be prime candidates for the extragalactic PeV neutrino sources that are observed on Earth. Characteristic signatures for protons can be hidden in the variable high-energy emission of these objects. In this thesis I investigated the broadband emission, particularly the high-energy X-ray and gamma-ray emission, of jetted AGN to address open questions regarding the particle acceleration and particle content of AGN jets, or the evolutionary state of the AGN itself. For this purpose, I analyzed various multiwavelength observations from optical to gamma-rays over a period of time using a combination of state-of-the-art spectroscopy and timing analysis. By nature, AGN are highly variable. Time-resolved spectral analysis provided a new dynamic view of these sources, which helped to determine distinct emission processes that are difficult to disentangle from spectral or timing methods alone. Firstly, this thesis tackles the problem of source classification in order to facilitate the search for interesting sources in large data archives and characterize new transient sources. I used spectral and timing analysis methods and supervised machine learning algorithms to design an automated source classification pipeline. The test and training samples were based on the third XMM-Newton point source catalog (3XMM-DR6). The set of input features for the machine learning algorithm was derived from an automated spectral modeling of all sources in the 3XMM-DR6, summing up to 137200 individual detections. The spectral features were complemented by results of a basic timing analysis as well as multiwavelength information provided by catalog cross-matches. The training of the algorithm and application to a test sample showed that the definition of the training sample was crucial: Despite oversampling minority source types with synthetic data to balance out the training sample, the algorithm preferentially predicted majority source types for unclassified objects. In general, the training process showed that the combination of spectral, timing and multiwavelength features performed best, with the lowest misclassification rate of \\sim2.4\\\%.
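A rough sketch of such a pipeline is given below; it uses plain random oversampling and a random forest as stand-ins for the synthetic oversampling and the supervised learner actually trained in the thesis, and it assumes the features and class labels are already assembled into numeric arrays X and y.

# Hedged sketch: balance the training sample by oversampling, then report the
# misclassification rate on a held-out test split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

def train_source_classifier(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    classes, counts = np.unique(y_tr, return_counts=True)
    idx = np.concatenate([
        resample(np.where(y_tr == c)[0], replace=True,
                 n_samples=counts.max(), random_state=0)
        for c in classes                                   # oversample each class
    ])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_tr[idx], y_tr[idx])
    return clf, 1.0 - clf.score(X_te, y_te)                # misclassification rate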
The method of time-resolved spectroscopy was then used in two studies to investigate the properties of two individual AGN, Mrk 421 and PKS 2004-447, in detail. Both objects belong to the class of gamma-ray emitting AGN. A very elusive sub-class is that of gamma-ray emitting Narrow Line Seyfert 1 (gNLS1) galaxies. These sources were only recently discovered as gamma-ray sources, in 2010, and a connection to young radio galaxies, especially compact steep spectrum (CSS) radio sources, has been proposed. The only gNLS1 in the Southern Hemisphere so far is PKS 2004-447, which lies at the lower end of the luminosity distribution of gNLS1. The source is part of the TANAMI VLBI program and is regularly monitored at radio frequencies. In this thesis, I presented and analyzed data from a dedicated multiwavelength campaign of PKS 2004-447, which my collaborators and I performed during 2012 and which was complemented by individual observations between 2013 and 2016. I focused on the detailed analysis of the X-ray emission and a first analysis of its broadband spectrum from radio to gamma-rays. Thanks to the dynamic SED I could show that earlier studies misinterpreted the optical spectrum of the source, which had led to an underestimation of the high-energy emission and had ignited a discussion on the source class. I show that the overall spectral properties are consistent with dominating jet emission comprised of synchrotron radiation and inverse Compton scattering from accelerated leptons. The broadband emission is very similar to typical examples of a certain type of blazar (flat-spectrum radio quasars) and does not present any unusual properties in comparison. Interestingly, the VLBI data showed a compact jet structure and a steep radio spectrum consistent with a compact steep spectrum source. This classified PKS 2004-447 as a young radio galaxy, in which the jet is still developing. The investigation of Mrk 421 introduced the blazar monitoring program which my collaborators and I started in 2014. By observing a blazar simultaneously in the optical, X-ray and gamma-ray bands during VHE outbursts, the program aims at providing extraordinary data sets to allow for the generation of a series of dynamical SEDs of high spectral and temporal resolution. The program makes use of the dense VHE monitoring by the FACT telescope. So far, there are three sources in our sample that we have been monitoring since 2014. I presented the data and the first analysis of one of the brightest and most variable blazars, Mrk 421, which had a moderate outburst in 2015 and triggered our program for the first time. With spectral timing analysis, I confirmed a tight correlation between the X-ray and TeV energy bands, which indicated that these jet emission components are causally connected. I discovered that the variations of the optical band were both correlated and anti-correlated with the high-energy emission, which suggested an independent emission component. Furthermore, the dynamic SEDs showed two different flaring behaviors, which differed in the presence or lack of a peak shift of the low-energy emission hump. These results further supported the hypothesis that more than one emission region contributed to the broadband emission of Mrk 421 during the observations.
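As a minimal illustration of such a band-to-band correlation analysis (not the analysis performed in the thesis), two light curves that have already been interpolated onto a common, evenly sampled time grid could be cross-correlated as follows; unevenly sampled data would require, for example, a discrete correlation function instead.

# Hedged sketch: lag of maximum correlation between two evenly sampled light curves.
import numpy as np

def peak_lag(flux_a, flux_b, dt):
    a = (flux_a - flux_a.mean()) / flux_a.std()
    b = (flux_b - flux_b.mean()) / flux_b.std()
    cc = np.correlate(a, b, mode="full") / len(a)
    lags = np.arange(-len(a) + 1, len(a)) * dt             # assumes equal lengths
    return lags[np.argmax(cc)], cc.max()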
Overall, the studies presented in this thesis demonstrated that time-resolved spectroscopy is a powerful tool to classify both source types and emission processes of astronomical objects, especially relativistic jets in AGN, and thus provide a deeper understanding of and new insights into their physics and properties.}, subject = {Astronomie}, language = {en} } @phdthesis{Kobs2024, author = {Kobs, Konstantin}, title = {Think outside the Black Box: Model-Agnostic Deep Learning with Domain Knowledge}, doi = {10.25972/OPUS-34968}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-349689}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2024}, abstract = {Deep Learning (DL) models are trained on a downstream task by feeding (potentially preprocessed) input data through a trainable Neural Network (NN) and updating its parameters to minimize the loss function between the predicted and the desired output. While this general framework has mainly remained unchanged over the years, the architectures of the trainable models have greatly evolved. Even though it is undoubtedly important to choose the right architecture, we argue that it is also beneficial to develop methods that address other components of the training process. We hypothesize that utilizing domain knowledge can be helpful to improve DL models in terms of performance and/or efficiency. Such model-agnostic methods can be applied to any existing or future architecture. Furthermore, the black box nature of DL models motivates the development of techniques to understand their inner workings. Considering the rapid advancement of DL architectures, it is again crucial to develop model-agnostic methods. In this thesis, we explore six principles that incorporate domain knowledge to understand or improve models. They are applied either on the input or output side of the trainable model. Each principle is applied to at least two DL tasks, leading to task-specific implementations. To understand DL models, we propose to use Generated Input Data coming from a controllable generation process requiring knowledge about the data properties. This way, we can understand the model's behavior by analyzing how it changes when one specific high-level input feature changes in the generated data. On the output side, Gradient-Based Attribution methods create a gradient at the end of the NN and then propagate it back to the input, indicating which low-level input features have a large influence on the model's prediction. The resulting input features can be interpreted by humans using domain knowledge. To improve the trainable model in terms of downstream performance, data and compute efficiency, or robustness to unwanted features, we explore principles that each address one of the training components besides the trainable model. Input Masking and Augmentation directly modifies the training input data, integrating knowledge about the data and its impact on the model's output. We also explore the use of Feature Extraction using Pretrained Multimodal Models which can be seen as a beneficial preprocessing step to extract useful features. When no training data is available for the downstream task, using such features and domain knowledge expressed in other modalities can result in a Zero-Shot Learning (ZSL) setting, completely eliminating the trainable model. The Weak Label Generation principle produces new desired outputs using knowledge about the labels, giving either a good pretraining or even exclusive training dataset to solve the downstream task.
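Purely as an illustration of the ZSL setting just mentioned (not an implementation from the thesis), classification can reduce to a nearest-neighbour search in a shared embedding space; embed_image and embed_text are hypothetical stand-ins for a frozen pretrained multimodal model.

# Hedged sketch: zero-shot prediction via cosine similarity of embeddings.
import numpy as np

def zero_shot_predict(image, label_texts, embed_image, embed_text):
    img = embed_image(image)
    img = img / np.linalg.norm(img)
    txt = np.stack([embed_text(t) for t in label_texts])
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    scores = txt @ img                                     # cosine similarities
    return label_texts[int(np.argmax(scores))], scores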
Finally, improving and choosing the right Loss Function is another principle we explore in this thesis. Here, we enrich existing loss functions with knowledge about label interactions or utilize and combine multiple task-specific loss functions in a multitask setting. We apply the principles to classification, regression, and representation tasks as well as to image and text modalities. We propose, apply, and evaluate existing and novel methods to understand and improve the model. Overall, this thesis introduces and evaluates methods that complement the development and choice of DL model architectures.}, subject = {Deep learning}, language = {en} } @phdthesis{Kluegl2015, author = {Kl{\"u}gl, Peter}, title = {Context-specific Consistencies in Information Extraction: Rule-based and Probabilistic Approaches}, publisher = {W{\"u}rzburg University Press}, address = {W{\"u}rzburg}, isbn = {978-3-95826-018-4 (print)}, doi = {10.25972/WUP-978-3-95826-019-1}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-108352}, school = {W{\"u}rzburg University Press}, year = {2015}, abstract = {Large amounts of communication, documentation as well as knowledge and information are stored in textual documents. Most often, these texts like webpages, books, tweets or reports are only available in an unstructured representation since they are created and interpreted by humans. In order to take advantage of this huge amount of concealed information and to include it in analytic processes, it needs to be transformed into a structured representation. Information extraction considers exactly this task. It tries to identify well-defined entities and relations in unstructured data and especially in textual documents. Interesting entities are often consistently structured within a certain context, especially in semi-structured texts. However, their actual composition varies and is possibly inconsistent among different contexts. Information extraction models stay behind their potential and return inferior results if they do not consider these consistencies during processing. This work presents a selection of practical and novel approaches for exploiting these context-specific consistencies in information extraction tasks. The approaches direct their attention not only to one technique, but are based on handcrafted rules as well as probabilistic models. A new rule-based system called UIMA Ruta has been developed in order to provide optimal conditions for rule engineers. This system consists of a compact rule language with a high expressiveness and strong development support. Both elements facilitate rapid development of information extraction applications and improve the general engineering experience, which reduces the necessary efforts and costs when specifying rules. The advantages and applicability of UIMA Ruta for exploiting context-specific consistencies are illustrated in three case studies. They utilize different engineering approaches for including the consistencies in the information extraction task. Either the recall is increased by finding additional entities with similar composition, or the precision is improved by filtering inconsistent entities. Furthermore, another case study highlights how transformation-based approaches are able to correct preliminary entities using the knowledge about the occurring consistencies. The approaches of this work based on machine learning rely on Conditional Random Fields, popular probabilistic graphical models for sequence labeling. 
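For readers unfamiliar with CRFs, a minimal sequence-labeling setup is sketched below; the feature template and the use of the publicly available sklearn-crfsuite package are assumptions made for illustration and do not reproduce the models trained in the thesis.

# Hedged sketch: token features and CRF training for a segmentation task
# such as labeling the parts of a scientific reference.
import sklearn_crfsuite

def token_features(tokens, i):
    return {
        "lower": tokens[i].lower(),
        "is_digit": tokens[i].isdigit(),
        "is_title": tokens[i].istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
    }

def train_crf(X, y):
    """X: list of feature-dict sequences, y: list of label sequences."""
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
    crf.fit(X, y)
    return crf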
These approaches take advantage of a consistency model, which is automatically induced while processing the document. The approach based on stacked graphical models utilizes the learnt descriptions as feature functions that have a static meaning for the model, but change their actual function for each document. The other two models extend the graph structure with additional factors dependent on the learnt model of consistency. They include feature functions for consistent and inconsistent entities as well as for additional positions that fulfill the consistencies. The presented approaches are evaluated in three real-world domains: segmentation of scientific references, template extraction in curricula vitae, and identification and categorization of sections in clinical discharge letters. They are able to achieve remarkable results and provide an error reduction of up to 30\% compared to commonly applied techniques.}, subject = {Information Extraction}, language = {en} } @phdthesis{Kleineisel2024, author = {Kleineisel, Jonas}, title = {Variational networks in magnetic resonance imaging - Application to spiral cardiac MRI and investigations on image quality}, doi = {10.25972/OPUS-34737}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-347370}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2024}, abstract = {Acceleration is a central aim of clinical and technical research in magnetic resonance imaging (MRI) today, with the potential to increase robustness, accessibility and patient comfort, reduce cost, and enable entirely new kinds of examinations. A key component in this endeavor is image reconstruction, as most modern approaches build on advanced signal and image processing. Here, deep learning (DL)-based methods have recently shown considerable potential, with numerous publications demonstrating benefits for MRI reconstruction. However, these methods often come at the cost of an increased risk of subtle yet critical errors. Therefore, the aim of this thesis is to advance DL-based MRI reconstruction, while ensuring high quality and fidelity with measured data. A network architecture specifically suited for this purpose is the variational network (VN). To investigate the benefits these networks can bring to non-Cartesian cardiac imaging, the first part presents an application of VNs, which were specifically adapted to the reconstruction of accelerated spiral acquisitions. The proposed method is compared to a segmented exam, a U-Net and a compressed sensing (CS) model using qualitative and quantitative measures. While the U-Net performed poorly, the VN as well as the CS reconstruction showed good output quality. In functional cardiac imaging, the proposed real-time method with VN reconstruction substantially accelerates examinations over the gold standard, from over 10 to just 1 minute. Clinical parameters agreed on average. Generally, in MRI reconstruction, the assessment of image quality is complex, in particular for modern non-linear methods. Therefore, advanced techniques for precise evaluation of quality were subsequently demonstrated. With two distinct methods, resolution and amplification or suppression of noise are quantified locally in each pixel of a reconstruction. Using these, local maps of resolution and noise in parallel imaging (GRAPPA), CS, U-Net and VN reconstructions were determined for MR images of the brain. In the tested images, GRAPPA delivers uniform and ideal resolution, but amplifies noise noticeably.
The other methods adapt their behavior to image structure, where different levels of local blurring were observed at edges compared to homogeneous areas, and noise was suppressed except at edges. Overall, VNs were found to combine a number of advantageous properties, including a good trade-off between resolution and noise, fast reconstruction times, and high overall image quality and fidelity of the produced output. Therefore, this network architecture seems highly promising for MRI reconstruction.}, subject = {Kernspintomografie}, language = {en} } @article{HermJanieschFuchs2022, author = {Herm, Lukas-Valentin and Janiesch, Christian and Fuchs, Patrick}, title = {Der Einfluss von menschlichen Denkmustern auf k{\"u}nstliche Intelligenz - eine strukturierte Untersuchung von kognitiven Verzerrungen}, series = {HMD Praxis der Wirtschaftsinformatik}, volume = {59}, journal = {HMD Praxis der Wirtschaftsinformatik}, number = {2}, issn = {1436-3011}, doi = {10.1365/s40702-022-00844-1}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-323787}, pages = {556-571}, year = {2022}, abstract = {K{\"u}nstliche Intelligenz (KI) dringt vermehrt in sensible Bereiche des allt{\"a}glichen menschlichen Lebens ein. Es werden nicht mehr nur einfache Entscheidungen durch intelligente Systeme getroffen, sondern zunehmend auch komplexe Entscheidungen. So entscheiden z. B. intelligente Systeme, ob Bewerber in ein Unternehmen eingestellt werden sollen oder nicht. Oftmals kann die zugrundeliegende Entscheidungsfindung nur schwer nachvollzogen werden und ungerechtfertigte Entscheidungen k{\"o}nnen dadurch unerkannt bleiben, weshalb die Implementierung einer solchen KI auch h{\"a}ufig als sogenannte Blackbox bezeichnet wird. Folglich steigt die Bedrohung, durch unfaire und diskriminierende Entscheidungen einer KI benachteiligt behandelt zu werden. Resultieren diese Verzerrungen aus menschlichen Handlungen und Denkmustern, spricht man von einer kognitiven Verzerrung oder einem kognitiven Bias. Aufgrund der Neuigkeit dieser Thematik ist jedoch bisher nicht ersichtlich, welche verschiedenen kognitiven Bias innerhalb eines KI-Projektes auftreten k{\"o}nnen. Ziel dieses Beitrages ist es, anhand einer strukturierten Literaturanalyse eine gesamtheitliche Darstellung zu erm{\"o}glichen. Die gewonnenen Erkenntnisse werden anhand des in der Praxis weit verbreiteten Cross-Industry Standard Process for Data Mining (CRISP-DM) Modells aufgearbeitet und klassifiziert. Diese Betrachtung zeigt, dass der menschliche Einfluss auf eine KI in jeder Entwicklungsphase des Modells gegeben ist und es daher wichtig ist, „mensch-{\"a}hnlichen" Bias in einer KI explizit zu untersuchen.}, language = {de} } @phdthesis{Hein2014, author = {Hein, Michael}, title = {Entwicklung computergest{\"u}tzter Methoden zur Bewertung von Docking-L{\"o}sungen und Entwurf niedermolekularer MIP-Inhibitoren}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-101585}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2014}, abstract = {Dockingbasierte Ans{\"a}tze z{\"a}hlen zu den wichtigsten Komponenten im virtuellen Screening. Sie dienen der Vorhersage der Ligandposition und -konformation in der Bindetasche sowie der Absch{\"a}tzung der Bindungsaffinit{\"a}t zum Protein. Bis heute stellt die korrekte Identifizierung proteingebundener Ligandkonformationen ein noch nicht vollst{\"a}ndig gel{\"o}stes Problem f{\"u}r Scoringfunktionen dar.
Der erste Teil der vorliegenden Arbeit ist daher der Entwicklung computergest{\"u}tzter Methoden zur Bewertung von Docking-L{\"o}sungen gewidmet. Der Fokus eines ersten Teilprojektes lag auf der Ber{\"u}cksichtigung der Abs{\"a}ttigung vergrabener Wasserstoffbr{\"u}ckenakzeptoren (HBA) und -donoren (HBD) bei der Bewertung von Docking-L{\"o}sungen. Nicht-abges{\"a}ttigte vergrabene HBA und HBD stellen einen der Bindungsaffinit{\"a}t abtr{\"a}glichen Beitrag dar, der bis dato aufgrund fehlender Struktur- bzw. Affinit{\"a}tsdaten in Scoringfunktionen vernachl{\"a}ssigt wird. Im Rahmen der vorliegenden Arbeit wurde auf der Basis einer detaillierten Untersuchung zur H{\"a}ufigkeit vergrabener nicht-abges{\"a}ttigter HBA und HBD in hochaufgel{\"o}sten Protein-Ligand-Komplexen des Hartshorn-Datensatzes eine empirische Filterfunktion („vnaHB"-Filterfunktion) entwickelt, die unerw{\"u}nschte Ligandbindeposen erkennt und von der Bewertung mittels Scoringfunktionen ausschließt. Der praktische Nutzen der empirischen Filterfunktion wurde f{\"u}r die Scoringfunktionen SFCscore und DSX anhand vorgenerierter Docking-L{\"o}sungen des Cheng-Datensatzes untersucht. Die H{\"a}ufigkeitsuntersuchung zeigt, dass eine Abs{\"a}ttigung vergrabener polarer Gruppen in Protein-Ligand-Komplexen f{\"u}r eine hochaffine Protein-Ligand-Bindung notwendig ist, da vergrabene nicht-abges{\"a}ttigte HBA und HBD nur selten auftreten. Eine vollst{\"a}ndige Abs{\"a}ttigung durch entsprechende Proteinpartner wird f{\"u}r ca. 48 \% der untersuchten Komplexe beobachtet, ca. 92 \% weisen weniger als drei haupts{\"a}chlich schwache, nicht-abges{\"a}ttigte HBA bzw. HBD (z. B. Etherfunktionen) auf. Unter Einbeziehung von Wassermolek{\"u}len in die H{\"a}ufigkeitsanalyse sind sogar f{\"u}r ca. 61 \% aller Komplexe alle wasserstoffbr{\"u}ckenbindenden Gruppen abges{\"a}ttigt. Im Gegensatz zu DSX werden f{\"u}r SFCscore nach Anwendung der empirischen Filterfunktion erh{\"o}hte Erfolgsraten f{\"u}r das Auffinden einer kristallnahen Pose (≤ 2.0 {\AA} Abweichung) unter den am besten bewerteten Docking-Posen erzielt. F{\"u}r die beste SFCscore-Funktion (SFCscore::229m) werden Steigerungen dieses als „Docking Power" bezeichneten Kriteriums f{\"u}r die Top-3-Posen (Erfolgsrate f{\"u}r die Identifizierung einer kristallnahen 2.0 {\AA} Pose unter den besten drei Docking-L{\"o}sungen) von 63.1 \% auf 64.2 \% beobachtet. In einem weiteren Teilprojekt wurden repulsive Protein-Ligand-Kontakte infolge sterischer {\"U}berlappungen der Bindungspartner bei der Bewertung von Docking-L{\"o}sungen ber{\"u}cksichtigt. Die ad{\"a}quate Einbeziehung solcher repulsiver Kontakte im Scoring ist f{\"u}r die Identifizierung proteingebundener Ligandkonformationen entscheidend, jedoch aufgrund fehlender Affinit{\"a}ts- bzw. Strukturdaten problematisch. Im Rahmen der vorliegenden Arbeit wurde auf der Basis des Lennard-Jones-Potentiales des AMBER-Kraftfeldes zun{\"a}chst ein neuer Deskriptor zur Beschreibung repulsiver Kontakte („Clash"-Deskriptor) entwickelt und zur Untersuchung der H{\"a}ufigkeit ung{\"u}nstiger Protein-Ligand-Kontakte in hochaufgel{\"o}sten Protein-Ligand-Komplexen des Hartshorn-Datensatzes herangezogen. Eine aus der H{\"a}ufigkeitsverteilung abgeleitete empirische Filterfunktion („Clash"-Filterfunktion) wurde anschließend der Bewertung von Docking-L{\"o}sungen des Cheng-Datensatzes mittels der Scoringfunktionen SFCscore und DSX vorgeschaltet, um unerw{\"u}nschte Ligandbindeposen auszuschließen. 
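Zur Einordnung, und ausdr{\"u}cklich nicht als Formel aus der zitierten Arbeit, sondern in der allgemein {\"u}blichen Schreibweise: Der genannte Lennard-Jones-Term des AMBER-Kraftfeldes, auf dem der „Clash"-Deskriptor aufbaut, hat die Form
\[ E_{\mathrm{vdW}} = \sum_{i<j} \left( \frac{A_{ij}}{r_{ij}^{12}} - \frac{B_{ij}}{r_{ij}^{6}} \right), \qquad A_{ij} = \varepsilon_{ij}\,(R^{*}_{ij})^{12}, \quad B_{ij} = 2\,\varepsilon_{ij}\,(R^{*}_{ij})^{6}, \]
wobei der repulsive $r^{-12}$-Anteil bei Unterschreitung des Gleichgewichtsabstands $R^{*}_{ij}$ stark ansteigt und damit sterische {\"U}berlappungen zwischen Protein und Ligand quantifizierbar macht.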
Die H{\"a}ufigkeitsuntersuchung zeigt, dass vorwiegend schwache repulsive Kontakte in Protein-Ligand-Komplexen auftreten. So werden in 75 \% der Komplexe des Hartshorn-Datensatzes abstoßende Potentiale unter 0.462 kcal/mol beobachtet. Zwar betragen die ung{\"u}nstigen Beitr{\"a}ge pro Komplex f{\"u}r 50 \% aller Strukturen ca. 0.8 kcal/mol bis 2.5 kcal/mol, jedoch k{\"o}nnen diese auf Ungenauigkeiten der Kristallstrukturen zur{\"u}ckzuf{\"u}hren sein bzw. durch g{\"u}nstige Protein-Ligand-Wechselwirkungen kompensiert werden. Die Anwendung der „Clash"-Filterfunktion zeigt signifikante Verbesserungen der Docking Power f{\"u}r SFCscore. F{\"u}r die beste SFCscore-Funktion (SFCscore::frag) werden Steigerungen der Erfolgsraten f{\"u}r das Auffinden einer kristallnahen Pose unter den drei am besten bewerteten Docking-L{\"o}sungen von 61.4 \% auf 86.9 \% erzielt, was an die Docking Power der bis dato besten Scoringfunktionen aus der Literatur (z. B. DSX, GlideScore::SP) heranreicht (Docking Power (DSX): 92.6 \%; Docking Power (GlideScore::SP): 86.9 \%). Die „Clash"-Filterfunktion allein ist auch der Kombination der „Clash"- und der „vnaHB"-Filterfunktion {\"u}berlegen. Ein weiterer Schwerpunkt der vorliegenden Arbeit wurde auf die Einbeziehung von Decoy-Daten (Struktur- und Affinit{\"a}tsdaten schwach affiner und inaktiver Liganden) im Zuge der Entwicklung computergest{\"u}tzter Methoden zur Bewertung von Docking-L{\"o}sungen gelegt. Dadurch soll eine ad{\"a}quate Ber{\"u}cksichtigung ung{\"u}nstiger Beitr{\"a}ge zur Bindungsaffinit{\"a}t erm{\"o}glicht werden, die f{\"u}r die Richtigkeit und Zuverl{\"a}ssigkeit ermittelter Vorhersagen essentiell ist. In der vorliegenden Arbeit wurden bin{\"a}re Klassifizierungsmodelle zur Bewertung von Docking-L{\"o}sungen entwickelt, die die Einbeziehung von Decoy-Daten ohne die Verf{\"u}gbarkeit von Affinit{\"a}tsdaten erlauben. Der Random-Forest-Algorithmus (RF), SFCscore-Deskriptoren, der neu entwickelte „Clash"-Deskriptor, und die Decoy-Datens{\"a}tze von Cheng und Huang (Trainingsdaten) bilden die Grundlage des leistungsf{\"a}higsten Klassifizierungsmodells. Der praktische Nutzen des „besten" RF-Modells wurde nach Kombination mit der Scoringfunktion DSX anhand der Docking Power f{\"u}r das Auffinden einer kristallnahen Pose auf Rang 1 am unabh{\"a}ngigen Cheng-/Huang- (Komplexe, die nicht in den Trainingsdaten enthalten sind) und CSAR-2012-Testdatensatz untersucht. Gegen{\"u}ber einer alleinigen Anwendung von DSX werden an beiden Testdatens{\"a}tzen weitere Verbesserungen der Docking Power erzielt (Cheng-/Huang-Testdatensatz: DSX 84.24 \%, RF 87.27 \%; CSAR-2012-Testdatensatz: DSX 87.93 \%, RF 91.38 \%). Das „beste" Modell zeichnet sich durch die zuverl{\"a}ssige Vorhersage richtig-positiver Docking-L{\"o}sungen f{\"u}r einige wenige Komplexe aus, f{\"u}r die DSX keine kristallnahe Ligandkonformation identifizieren kann. Ein visueller Vergleich der jeweils am besten bewerteten RF- und DSX-Pose f{\"u}r diese Komplexe zeigt Vorteile des RF-Modells hinsichtlich der Erkennung f{\"u}r die Protein-Ligand-Bindung essentieller Wechselwirkungen. Die Untersuchung der Bedeutung einzelner SFCscore-Deskriptoren f{\"u}r die Klassifizierung von Docking-L{\"o}sungen sowie die Analyse der Misserfolge nach Anwendung des Modells geben wertvolle Hinweise zur weiteren Optimierung der bestehenden Methode. 
Hinsichtlich der zu bewertenden Eigenschaften ausgeglichenere Trainingsdaten, Weiterentwicklungen bestehender SFCscore-Deskriptoren sowie die Implementierung neuer Deskriptoren zur Beschreibung bis dato nicht-ber{\"u}cksichtigter Beitr{\"a}ge zur Bindungsaffinit{\"a}t stellen Ansatzpunkte zur Verbesserung dar. Der zweite Teil der vorliegenden Arbeit umfasst die Anwendung dockingbasierter Methoden im Rahmen der Entwicklung neuer Inhibitoren des „Macrophage Infectivity Potentiator"-(MIP)-Proteins von Legionella pneumophila und Burkholderia pseudomallei. Das MIP-Protein von Legionella pneumophila stellt einen wichtigen Virulenzfaktor und daher ein attraktives Zielprotein f{\"u}r die Therapie der Legionellose dar. Im Rahmen der vorliegenden Arbeit erfolgten systematische Optimierungen des Pipecolins{\"a}ure-Sulfonamides 1, des bis dato besten niedermolekularen MIP-Inhibitors (IC50 (1): 9 ± 0.7 µM). Nach Hot-Spot-Analysen der Bindetasche wurden Docking-Studien zur Auswahl aussichtsreicher Kandidaten f{\"u}r die Synthese und Testung auf MIP-Inhibition durchgef{\"u}hrt. Die Ergebnisse der Hot-Spot-Analysen zeigen g{\"u}nstige Wechselwirkungsbereiche f{\"u}r Donorgruppen und hydrophobe Substituenten in meta-Position sowie Akzeptorgruppen in para-Position des Benzylringes von 1 auf. Die Einf{\"u}hrung einer Nitrofunktion in para-Position des Benzylringes von 1 (2h) resultiert in einer erh{\"o}hten MIP-Inhibition (IC50 (2h): 5 ± 1.5 µM), was wahrscheinlich auf die Ausbildung einer zus{\"a}tzlichen Wasserstoffbr{\"u}cke zu Gly116 zur{\"u}ckzuf{\"u}hren ist. Selektivit{\"a}tsverbesserungen gegen{\"u}ber dem strukturverwandten humanen FKBP12-Protein werden insbesondere f{\"u}r das para-Aminoderivat von 1 (2n) erzielt (Selektivit{\"a}tsindex (1): 45, Selektivit{\"a}tsindex (2n): 4.2; mit Selektivit{\"a}tsindex = IC50 (MIP)/IC50 (FKBP12)). Der Ersatz des hydrophoben Trimethoxyphenylrestes von 1 durch einen Pyridinring (2s) f{\"u}hrt zu einer verbesserten L{\"o}slichkeit bei vergleichbarer MIP-Inhibition. Das MIP-Protein von Burkholderia pseudomallei spielt eine wichtige Rolle in der Pathogenese der Melioidose und stellt daher ein attraktives Zielprotein f{\"u}r die Entwicklung neuer Arzneistoffe dar. In der vorliegenden Arbeit erfolgten Optimierungen des bis dato besten niedermolekularen MIP-Inhibitors 1. Ausgehend von einem Strukturvergleich von Burkholderia pseudomallei MIP mit Legionella pneumophila MIP und einer Hot-Spot-Analyse der Burkholderia pseudomallei MIP-Bindetasche wurden Docking-Studien zur Auswahl aussichtsreicher Kandidaten f{\"u}r die Synthese und Testung auf MIP-Inhibition durchgef{\"u}hrt. Der Strukturvergleich zeigt eine hohe Homologie beider Bindetaschen. Gr{\"o}ßere konformelle {\"A}nderungen werden lediglich f{\"u}r den von Ala94, Gly95, Val97 und Ile98 geformten Bindetaschenbereich beobachtet, was unterschiedliche Optimierungsstrategien f{\"u}r 1 erforderlich macht. G{\"u}nstige Wechselwirkungsbereiche der Burkholderia pseudomallei MIP-Bindetasche finden sich einerseits f{\"u}r Donorgruppen oder hydrophobe Substituenten in para-Position des Benzylringes (Region A) von 1, andererseits f{\"u}r Akzeptor- bzw. Donorgruppen in para- bzw. meta-/para-Position des Trimethoxyphenylringes (Region B). Anhand von Docking-Studien konnten sowohl f{\"u}r Variationen in Region A als auch in Region B aussichtsreiche Kandidaten identifiziert werden. Initiale MIP-Inhibitionsmessungen der bis dato synthetisierten Derivate deuten auf erh{\"o}hte Hemmungen im Vergleich zu 1 hin. 
Der Ersatz des hydrophoben Trimethoxyphenylrestes von 1 durch einen Pyridinring f{\"u}hrt auch hier zu vergleichbarer MIP-Inhibition bei verbesserter L{\"o}slichkeit. Derzeit sind weitere Synthesen und Testungen aussichtsreicher Liganden durch die Kooperationspartner geplant. Die Ergebnisse der Inhibitionsmessungen sollen deren Nutzen als MIP-Inhibitoren aufzeigen und wertvolle Informationen f{\"u}r weitere Zyklen des strukturbasierten Wirkstoffdesigns liefern.}, subject = {Arzneimitteldesign}, language = {de} } @phdthesis{Hauser2020, author = {Hauser, Matthias}, title = {Smart Store Applications in Fashion Retail}, doi = {10.25972/OPUS-19301}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-193017}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2020}, abstract = {Traditional fashion retailers are increasingly hard-pressed to keep up with their digital competitors. In this context, the re-invention of brick-and-mortar stores as smart retail environments is being touted as a crucial step towards regaining a competitive edge. This thesis describes a design-oriented research project that deals with automated product tracking on the sales floor and presents three smart fashion store applications that are tied to such localization information: (i) an electronic article surveillance (EAS) system that distinguishes between theft and non-theft events, (ii) an automated checkout system that detects customers' purchases when they are leaving the store and associates them with individual shopping baskets to automatically initiate payment processes, and (iii) a smart fitting room that detects the items customers bring into individual cabins and identifies the items they are currently most interested in to offer additional customer services (e.g., product recommendations or omnichannel services). The implementation of such cyberphysical systems in established retail environments is challenging, as architectural constraints, well-established customer processes, and customer expectations regarding privacy and convenience pose challenges to system design. To overcome these challenges, this thesis leverages Radio Frequency Identification (RFID) technology and machine learning techniques to address the different detection tasks. To optimally configure the systems and draw robust conclusions regarding their economic value contribution, beyond technological performance criteria, this thesis furthermore introduces a service operations model that allows mapping the systems' technical detection characteristics to business relevant metrics such as service quality and profitability. This analytical model reveals that the same system component for the detection of object transitions is well suited for the EAS application but does not have the necessary high detection accuracy to be used as a component of an automated checkout system.}, subject = {Laden}, language = {en} } @phdthesis{Gruendler2018, author = {Gr{\"u}ndler, Klaus}, title = {A Contribution to the Empirics of Economic Development - The Role of Technology, Inequality, and the State}, edition = {1. Auflage}, publisher = {W{\"u}rzburg University Press}, address = {W{\"u}rzburg}, isbn = {978-3-95826-072-6 (Print)}, doi = {10.25972/WUP-978-3-95826-073-3}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-141520}, school = {W{\"u}rzburg University Press}, pages = {300}, year = {2018}, abstract = {This dissertation contributes to the empirical analysis of economic development. 
The continuing poverty in many Sub-Saharan-African countries as well as the declining trend in growth in the advanced economies that was initiated around the turn of the millennium raises a number of new questions which have received little attention in recent empirical studies. Is culture a decisive factor for economic development? Do larger financial markets trigger positive stimuli with regard to incomes, or is the recent increase in their size in advanced economies detrimental to economic growth? What causes secular stagnation, i.e. the reduction in growth rates of the advanced economies observable over the past 20 years? What is the role of inequality in the growth process, and how do governmental attempts to equalize the income distribution affect economic development? And finally: Is the process of democratization accompanied by an increase in living standards? These are the central questions of this doctoral thesis. To facilitate the empirical analysis of the determinants of economic growth, this dissertation introduces a new method to compute classifications in the field of social sciences. The approach is based on mathematical algorithms of machine learning and pattern recognition. Whereas the construction of indices typically relies on arbitrary assumptions regarding the aggregation strategy of the underlying attributes, utilization of Support Vector Machines transfers the question of how to aggregate the individual components into a non-linear optimization problem. Following a brief overview of the theoretical models of economic growth provided in the first chapter, the second chapter illustrates the importance of culture in explaining the differences in incomes across the globe. In particular, if inhabitants have a lower average degree of risk-aversion, the implementation of new technology proceeds much faster compared with countries with a lower tendency towards risk. However, this effect depends on the legal and political framework of the countries, their average level of education, and their stage of development. The initial wealth of individuals is often not sufficient to cover the cost of investments in both education and new technologies. By providing loans, a developed financial sector may help to overcome this shortage. However, the investigations in the third chapter show that this mechanism is dependent on the development levels of the economies. In poor countries, growth of the financial sector leads to better education and higher investment levels. This effect diminishes along the development process, as intermediary activity is increasingly replaced by speculative transactions. Particularly in times of low technological innovation, an increasing financial sector has a negative impact on economic development. In fact, the world economy is currently in a phase of this kind. Since the turn of the millennium, growth rates in the advanced economies have experienced a multi-national decline, leading to an intense debate about "secular stagnation" initiated at the beginning of 2015. The fourth chapter deals with this phenomenon and shows that the growth potentials of new technologies have been gradually declining since the beginning of the 2000s. If incomes are unequally distributed, some individuals can invest less in education and technological innovations, which is why the fifth chapter identifies an overall negative effect of inequality on growth. This influence, however, depends on the development level of countries. 
While the negative effect is strongly pronounced in poor economies with a low degree of equality of opportunity, this influence disappears during the development process. Accordingly, redistributive polices of governments exert a growth-promoting effect in developing countries, while in advanced economies, the fostering of equal opportunities is much more decisive. The sixth chapter analyzes the growth effect of the political environment and shows that the ambiguity of earlier studies is mainly due to unsophisticated measurement of the degree of democratization. To solve this problem, the chapter introduces a new method based on mathematical algorithms of machine learning and pattern recognition. While the approach can be used for various classification problems in the field of social sciences, in this dissertation it is applied for the problem of democracy measurement. Based on different country examples, the chapter shows that the resulting SVMDI is superior to other indices in modeling the level of democracy. The subsequent empirical analysis emphasizes a significantly positive growth effect of democracy measured via SVMDI.}, subject = {Wirtschaftsentwicklung}, language = {en} } @phdthesis{Grohmann2022, author = {Grohmann, Johannes Sebastian}, title = {Model Learning for Performance Prediction of Cloud-native Microservice Applications}, doi = {10.25972/OPUS-26160}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-261608}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2022}, abstract = {One consequence of the recent coronavirus pandemic is increased demand and use of online services around the globe. At the same time, performance requirements for modern technologies are becoming more stringent as users become accustomed to higher standards. These increased performance and availability requirements, coupled with the unpredictable usage growth, are driving an increasing proportion of applications to run on public cloud platforms as they promise better scalability and reliability. With data centers already responsible for about one percent of the world's power consumption, optimizing resource usage is of paramount importance. Simultaneously, meeting the increasing and changing resource and performance requirements is only possible by optimizing resource management without introducing additional overhead. This requires the research and development of new modeling approaches to understand the behavior of running applications with minimal information. However, the emergence of modern software paradigms makes it increasingly difficult to derive such models and renders previous performance modeling techniques infeasible. Modern cloud applications are often deployed as a collection of fine-grained and interconnected components called microservices. Microservice architectures offer massive benefits but also have broad implications for the performance characteristics of the respective systems. In addition, the microservices paradigm is typically paired with a DevOps culture, resulting in frequent application and deployment changes. Such applications are often referred to as cloud-native applications. In summary, the increasing use of ever-changing cloud-hosted microservice applications introduces a number of unique challenges for modeling the performance of modern applications. These include the amount, type, and structure of monitoring data, frequent behavioral changes, or infrastructure variabilities. 
This violates common assumptions of the state of the art and opens a research gap for our work. In this thesis, we present five techniques for automated learning of performance models for cloud-native software systems. We achieve this by combining machine learning with traditional performance modeling techniques. Unlike previous work, our focus is on cloud-hosted and continuously evolving microservice architectures, so-called cloud-native applications. Therefore, our contributions aim to solve the above challenges to deliver automated performance models with minimal computational overhead and no manual intervention. Depending on the cloud computing model, privacy agreements, or monitoring capabilities of each platform, we identify different scenarios where performance modeling, prediction, and optimization techniques can provide great benefits. Specifically, the contributions of this thesis are as follows: Monitorless: Application-agnostic prediction of performance degradations. To manage application performance with only platform-level monitoring, we propose Monitorless, the first truly application-independent approach to detecting performance degradation. We use machine learning to bridge the gap between platform-level monitoring and application-specific measurements, eliminating the need for application-level monitoring. Monitorless creates a single and holistic resource saturation model that can be used for heterogeneous and untrained applications. Results show that Monitorless infers resource-based performance degradation with 97\% accuracy. Moreover, it can achieve similar performance to typical autoscaling solutions, despite using less monitoring information. SuanMing: Predicting performance degradation using tracing. We introduce SuanMing to mitigate performance issues before they impact the user experience. This contribution is applied in scenarios where tracing tools enable application-level monitoring. SuanMing predicts explainable causes of expected performance degradations and prevents performance degradations before they occur. Evaluation results show that SuanMing can predict and pinpoint future performance degradations with an accuracy of over 90\%. SARDE: Continuous and autonomous estimation of resource demands. We present SARDE to learn application models for highly variable application deployments. This contribution focuses on the continuous estimation of application resource demands, a key parameter of performance models. SARDE represents an autonomous ensemble estimation technique. It dynamically and continuously optimizes, selects, and executes an ensemble of approaches to estimate resource demands in response to changes in the application or its environment. Through continuous online adaptation, SARDE efficiently achieves an average resource demand estimation error of 15.96\% in our evaluation. DepIC: Learning parametric dependencies from monitoring data. DepIC utilizes feature selection techniques in combination with an ensemble regression approach to automatically identify and characterize parametric dependencies. Although parametric dependencies can massively improve the accuracy of performance models, DepIC is the first approach to automatically learn such parametric dependencies from passive monitoring data streams. Our evaluation shows that DepIC achieves 91.7\% precision in identifying dependencies and reduces the characterization prediction error by 30\% compared to the best individual approach. Baloo: Modeling the configuration space of databases. 
To study the impact of different configurations within distributed DBMSs, we introduce Baloo. Our last contribution models the configuration space of databases considering measurement variabilities in the cloud. More specifically, Baloo dynamically estimates the required benchmarking measurements and automatically builds a configuration space model of a given DBMS. Our evaluation of Baloo on a dataset consisting of 900 configuration points shows that the framework achieves a prediction error of less than 11\% while saving up to 80\% of the measurement effort. Although the contributions themselves are orthogonal to each other, taken together they provide a holistic approach to performance management of modern cloud-native microservice applications. Our contributions are a significant step forward as they specifically target novel and cloud-native software development and operation paradigms, surpassing the capabilities of previous approaches and overcoming their limitations. In addition, the research presented in this thesis also has a significant impact on the industry, as the contributions were developed in collaboration with research teams from Nokia Bell Labs, Huawei, and Google. Overall, our solutions open up new possibilities for managing and optimizing cloud applications and improve cost and energy efficiency.}, subject = {Cloud Computing}, language = {en} } @phdthesis{Gold2023, author = {Gold, Lukas}, title = {Methods for the state estimation of lithium-ion batteries}, doi = {10.25972/OPUS-30618}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-306180}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2023}, abstract = {This work introduced the reader to all fields relevant to an ultrasound-based state of charge estimation and provided a blueprint for the procedure to establish and test the fundamentals of such an approach. It spanned from an in-depth electrochemical characterization of the studied battery cells, through the establishment of the measurement technique, the digital processing of ultrasonic transmission signals, and the characterization of the SoC-dependent property changes of those signals, to a proof of concept of an ultrasound-based state of charge estimation. In its battery section, the State of the art \& theoretical background chapter focused on the mechanical property changes of lithium-ion batteries during operation. The components and processes involved in manufacturing a battery cell were described to establish the fundamentals for later interrogation. A comprehensive summary of methods for state estimation was given, and emphasis was placed on mechanical methods, including a critical review of the most recent research on ultrasound-based state estimation. Afterward, the fundamentals of ultrasonic non-destructive evaluation were introduced, starting with the sound propagation modes in isotropic, boundary-free media, followed by the introduction of boundaries and non-isotropic structure, to finally approach the class of fluid-saturated porous media, to which batteries can be counted. As the processing of the ultrasonic signals transmitted through lithium-ion battery cells with the aim of feature extraction was one of the main goals of this work, the fundamentals of digital signal processing and methods for time-of-flight estimation were reviewed and compared in a separate section.
All available information on the interrogated battery cell and the instrumentation was collected in the Experimental methods \& instrumentation chapter, including a detailed step-by-step manual of the process developed in this work to create and attach a sensor stack for ultrasonic interrogation based on low-cost off-the-shelf piezo elements. The Results \& discussion chapter opened with an in-depth electrochemical and post-mortem interrogation to reverse engineer the battery cell design and its internal structure. The combination of inductively coupled plasma-optical emission spectrometry and incremental capacity analysis, applied to three-electrode lab cells constructed from the studied battery cell's materials, made it possible to identify the SoC ranges in which phase transitions and staging occur and thereby to link changes in the ultrasonic signal properties directly to the state of the active materials, which makes this work stand out among other studies on ultrasound-based state estimation. Additional dilatometer experiments proved that the measured effect in the ultrasonic time of flight cannot originate from the thickness increase of the battery cells alone, as this thickness increase is smaller than, and opposite in direction to, the change in time of flight. Therefore, changes in elastic modulus and density must be responsible for the observed effect. The construction of the sensor stack from off-the-shelf piezo elements, its electromagnetic shielding, and its attachment to both sides of the battery cells were treated in a subsequent section. Experiments verified the necessity of shielding and its negligible influence on the ultrasonic signals. A hypothesis describing the metal layer in the pouch foil as the transport medium of an electrical coupling/distortion between the sending and the receiving sensor was formulated and tested. Impedance spectroscopy was shown to be a useful tool to characterize the resonant behavior of piezo elements and to ensure their mechanical coupling to the surface of the battery cells. The excitation of the piezo elements by a raised cosine (RCn) waveform with varied center frequency in the range of 50 kHz to 250 kHz was studied in the frequency domain, and the influence of the resonant behavior identified beforehand by impedance spectroscopy on the waveform and frequency content was found to be uncritical. Therefore, the forced oscillation produced by this excitation was assumed to be mechanically coupled as ultrasonic waves into the battery cells. The ultrasonic waves transmitted through the battery cell were recorded by piezo elements on the opposing side. A first inspection of the raw, unprocessed signals identified the transmission of two main wave packages and revealed two major trends: the time of flight of the ultrasonic wave packages decreases with the center frequency of the RCn waveform and with state of charge. These trends were assessed further in the subsequent sections. Therefore, methods for the extraction of features (properties) from the ultrasonic signals were established, compared, and tested in a dedicated section. Several simple and advanced thresholding methods were compared with envelope-based and cross-correlation methods to estimate the time of flight (ToF). It was demonstrated that the envelope-based method yields the most robust estimate for the first and second wave package.
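A minimal sketch of such an envelope-based ToF estimate is given below. It assumes a Hilbert-transform envelope and a relative-threshold arrival criterion; the function name, the threshold value, and the use of NumPy/SciPy are illustrative assumptions rather than the implementation used in the thesis.

import numpy as np
from scipy.signal import hilbert

def estimate_tof_envelope(signal, fs, threshold=0.5):
    # signal: 1-D array with the received waveform, fs: sampling rate in Hz
    envelope = np.abs(hilbert(signal))        # magnitude of the analytic signal
    # arrival taken as the first sample exceeding a fraction of the envelope peak
    arrival = np.argmax(envelope >= threshold * envelope.max())
    return arrival / fs                       # time of flight in seconds

In practice, one would restrict the search to the window of the first or second wave package before applying the threshold.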
The finding that the envelope-based method is the most robust is in accordance with the literature, which states that envelope-based methods are best suited for dispersive, absorptive media [204], a class to which lithium-ion batteries can be counted. The respective trends were already suggested by the heatmap plots of the raw signals vs. RCn frequency and SoC. To enable such a robust estimate, an FIR filter had to be designed to preprocess the transmitted signals and thereby attenuate frequency components that verifiably lead to a distorted shape of the envelope. With a robust ToF estimation method selected, the signal properties ToF and transmitted energy content (EC) were characterized in depth. A study of cycle-to-cycle variations unveiled that the signal properties are affected by a long rest period and the associated relaxation of the multi-particle system ``battery cell'' to equilibrium. In detail, during cycling, the signal properties do not reach the same value at a given SoC in two subsequent cycles if the first of the two cycles follows a long rest period. In accordance with the literature, a break-in period extending over more than ten cycles post-formation was observed. During this break-in period, the mechanical properties of the system are said to change until a steady state is reached [25]. Experiments at different C-rates showed that ultrasonic signal properties can sense the non-equilibrium state of a battery cell, characterized by an increasing area between the charge and discharge curves in the respective signal property vs. SoC plot. This non-equilibrium state relaxes in the rest period following the discharge after the cut-off voltage is reached. The relaxation in the rest period following the charge is much smaller and shows little C-rate dependency, as the state is prepared by constant-voltage charging at the end-of-charge voltage. For a purely statistical SoC estimation approach, as employed in this work, where only instantaneous measurements are taken into account and the historic course of the measurement is not utilized as a source of information, the presence of hysteresis and relaxation leads to a reduced estimation accuracy. Future research should address this issue or even utilize the relaxation to improve the estimation accuracy by incorporating historic information, e.g., by using the derivative of a signal property as an additional feature. The signal properties were then tested for their correlation with SoC as a function of RCn frequency. This allowed trends in the behavior of the signal properties as a function of RCn frequency and C-rate to be identified in a condensed fashion and thereby made it possible to predict the frequency range, about 50 kHz to 125 kHz, in which the course of the signal properties is best suited for SoC estimation. The final section provided a proof of concept of the ultrasound-based SoC estimation by applying a support vector regression (SVR) to the previously studied ultrasonic signal properties, as well as to current and battery cell voltage. The included case study was split into different parts that assessed the ability of an SVR to estimate the SoC in a variety of scenarios. Seven battery cells, prepared with sensor stacks attached to both faces, were used to generate 14 datasets. First, a comparison of self-tests, where a portion of a dataset is used for training and another for testing, and cross-tests, which use the dataset of one cell for training and the dataset of another for testing, was performed.
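To illustrate the self-test versus cross-test protocol described above, the following sketch trains an SVR on the features of one dataset and evaluates it on another; the feature and column names, the assumption of pandas data frames with one row per measurement, the scikit-learn pipeline, and the RMSE computation are illustrative assumptions and do not reproduce the pipeline of the thesis.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

FEATURES = ["tof", "energy", "current", "voltage"]   # assumed feature columns

def fit_soc_model(train_df):
    # scale the features and fit a support vector regression for the SoC
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    model.fit(train_df[FEATURES], train_df["soc"])
    return model

def rmse(model, test_df):
    # root mean square error of the SoC prediction on a held-out dataset
    prediction = model.predict(test_df[FEATURES])
    return np.sqrt(np.mean((prediction - test_df["soc"]) ** 2))

A self-test would split one cell's dataset into the train_df and test_df portions, whereas a cross-test would pass the dataset of one cell as train_df and that of another cell as test_df.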
For the self-tests and cross-tests, a root mean square error (RMSE) of 3.9\% to 4.8\% SoC and 3.6\% to 10.0\% SoC was achieved, respectively. In general, it was observed that the SVR is prone to overestimation at low SoCs and underestimation at high SoCs, which was attributed to the pronounced hysteresis and relaxation of the ultrasonic signal properties in these SoC ranges. The fact that higher accuracy is achieved if the exact cell is known to the model indicates that a variation between cells exists. This variation between cells can originate from differences in mechanical properties as a result of production variations or from differences in manual sensor placement, mechanical coupling, or resonant behavior of the ultrasonic sensors. To mitigate the effect of the cell-to-cell variations, a test was performed where the datasets of six out of the seven cells were combined as training data, and the dataset of the seventh cell was used for testing. This reduced the spread of the RMSE from (3.6 - 10.0)\% SoC to (5.9 - 8.5)\% SoC, once again demonstrating that a data-based approach for state estimation becomes more reliable with a larger data basis. Utilizing self-tests on seven datasets, the effect of additional features on the state estimation result was tested. The inclusion of an additional feature did not necessarily improve the estimation accuracy, but it was shown that a combination of ultrasonic and electrical features is superior to training with either type of feature alone. To test the ability of the model to estimate the SoC under unknown cycling conditions, a test was performed where the C-rate of the test dataset was not included in the training data. The result suggests that for practical applications it might be sufficient to perform training at the boundaries of the use cases in a controlled laboratory environment in order to handle the estimation across a broad spectrum of use cases. In comparison with the literature, this study stands out by utilizing and modifying off-the-shelf piezo elements to equip state-of-the-art lithium-ion battery cells with ultrasonic sensors, by employing a range of center frequencies for the waveform transmitted through the battery cell instead of a fixed frequency, and by allowing the SVR to choose the frequency that yields the best result. The characterization of the ultrasonic signal properties as a function of RCn frequency and SoC and the assignment of characteristic changes in the signal properties to electrochemical processes, such as phase transitions and staging, make this work unique. By studying a range of use cases, it was demonstrated that an improved SoC estimation accuracy can be achieved with the aid of ultrasonic measurements, thanks to the correlation of the mechanical properties of the battery cells with the SoC.}, subject = {Lithium-Ionen-Akkumulator}, language = {en} } @phdthesis{Allgaier2024, author = {Allgaier, Johannes}, title = {Machine Learning Explainability on Multi-Modal Data using Ecological Momentary Assessments in the Medical Domain}, doi = {10.25972/OPUS-35118}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-351189}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2024}, abstract = {Introduction. Mobile health (mHealth) integrates mobile devices into healthcare, enabling remote monitoring, data collection, and personalized interventions.
Machine Learning (ML), a subfield of Artificial Intelligence (AI), can use mHealth data to confirm or extend domain knowledge by finding associations within the data, with the goal of improving healthcare decisions. In this work, two data collection techniques were used for mHealth data fed into ML systems: Mobile Crowdsensing (MCS), which is a collaborative data gathering approach, and Ecological Momentary Assessments (EMA), which capture real-time individual experiences within the individual's common environments using questionnaires and sensors. We collected EMA and MCS data on tinnitus and COVID-19. About 15\% of the world's population suffers from tinnitus. Materials \& Methods. This thesis investigates the challenges of ML systems when using MCS and EMA data. It asks: How can ML confirm or broaden domain knowledge? Domain knowledge refers to expertise and understanding in a specific field, gained through experience and education. Are ML systems always superior to simple heuristics, and if so, how can one achieve explainable AI (XAI) in the presence of mHealth data? An XAI method enables a human to understand why a model makes certain predictions. Finally, which guidelines can be beneficial for the use of ML within the mHealth domain? In tinnitus research, ML discerns gender-, temperature-, and season-related variations among patients. In the realm of COVID-19, we collaboratively designed a COVID-19 check app for public education, incorporating EMA data to offer informative feedback on COVID-19-related matters. This thesis uses seven EMA datasets with more than 250,000 assessments. Our analyses revealed a set of challenges: app user over-representation, time gaps, identity ambiguity, and operating-system-specific rounding errors, among others. Our systematic review of 450 medical studies assessed the prior utilization of XAI methods. Results. ML models predict gender and tinnitus perception, validating gender-linked tinnitus disparities. Using season and temperature to predict tinnitus shows the association of these variables with tinnitus. Multiple assessments of one app user can constitute a group. Neglecting these groups in datasets leads to model overfitting. In select instances, heuristics outperform ML models, highlighting the need for domain expert consultation to unveil hidden groups or find simple heuristics. Conclusion. This thesis suggests guidelines for mHealth-related data analyses and improves estimates for ML performance. Close communication with medical domain experts is essential to identify latent user subsets and the incremental benefits of ML.}, subject = {Maschinelles Lernen}, language = {en} }
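The grouping issue raised in the last abstract, where repeated assessments of one app user must not be split across training and test data, can be illustrated with a group-aware cross-validation. The sketch below assumes scikit-learn, a pandas data frame of assessments, and hypothetical column names; the choice of classifier is arbitrary, and the code is an illustration rather than code from the cited thesis.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

def evaluate_with_user_groups(df, feature_cols, target_col, group_col):
    # keep all assessments of one user in the same fold, avoiding the
    # optimistic scores that per-assessment splits would produce
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    splitter = GroupKFold(n_splits=5)
    scores = cross_val_score(model, df[feature_cols], df[target_col],
                             groups=df[group_col], cv=splitter)
    return scores.mean()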