One consequence of the recent coronavirus pandemic is increased demand and use of online services around the globe. At the same time, performance requirements for modern technologies are becoming more stringent as users become accustomed to higher standards. These increased performance and availability requirements, coupled with the unpredictable usage growth, are driving an increasing proportion of applications to run on public cloud platforms as they promise better scalability and reliability.
With data centers already responsible for about one percent of the world's power consumption, optimizing resource usage is of paramount importance. Simultaneously, meeting the increasing and changing resource and performance requirements is only possible by optimizing resource management without introducing additional overhead. This requires the research and development of new modeling approaches to understand the behavior of running applications with minimal information.
However, the emergence of modern software paradigms makes it increasingly difficult to derive such models and renders previous performance modeling techniques infeasible. Modern cloud applications are often deployed as a collection of fine-grained and interconnected components called microservices. Microservice architectures offer massive benefits but also have broad implications for the performance characteristics of the respective systems. In addition, the microservices paradigm is typically paired with a DevOps culture, resulting in frequent application and deployment changes. Such applications are often referred to as cloud-native applications. In summary, the increasing use of ever-changing cloud-hosted microservice applications introduces a number of unique challenges for modeling the performance of modern applications. These include the amount, type, and structure of monitoring data, frequent behavioral changes, and infrastructure variability. These characteristics violate common assumptions of the state of the art and open a research gap for our work.
In this thesis, we present five techniques for automated learning of performance models for cloud-native software systems. We achieve this by combining machine learning with traditional performance modeling techniques. Unlike previous work, our focus is on cloud-hosted and continuously evolving microservice architectures, so-called cloud-native applications. Therefore, our contributions aim to solve the above challenges to deliver automated performance models with minimal computational overhead and no manual intervention. Depending on the cloud computing model, privacy agreements, or monitoring capabilities of each platform, we identify different scenarios where performance modeling, prediction, and optimization techniques can provide great benefits. Specifically, the contributions of this thesis are as follows:
Monitorless: Application-agnostic prediction of performance degradations.
To manage application performance with only platform-level monitoring, we propose Monitorless, the first truly application-independent approach to detecting performance degradation. We use machine learning to bridge the gap between platform-level monitoring and application-specific measurements, eliminating the need for application-level monitoring. Monitorless creates a single and holistic resource saturation model that can be used for heterogeneous and untrained applications. Results show that Monitorless infers resource-based performance degradation with 97% accuracy. Moreover, it can achieve similar performance to typical autoscaling solutions, despite using less monitoring information.
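As an illustration of the idea (not the actual Monitorless model, which is trained on large heterogeneous datasets), a resource saturation classifier can be learned from platform-level metrics alone; the feature names and training values below are hypothetical:

```python
# Hedged sketch: a minimal nearest-centroid classifier over
# platform-level metrics only, standing in for the actual model.

def centroid(rows):
    """Element-wise mean of a list of equal-length metric vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(samples):
    """samples: list of (metrics, label) with label in {'ok', 'saturated'}."""
    by_label = {}
    for metrics, label in samples:
        by_label.setdefault(label, []).append(metrics)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, metrics):
    """Return the label whose centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda label: dist(model[label], metrics))

# Hypothetical platform-level features: [cpu_util, mem_util, disk_io_wait]
training = [
    ([0.30, 0.40, 0.01], "ok"),
    ([0.35, 0.45, 0.02], "ok"),
    ([0.95, 0.90, 0.30], "saturated"),
    ([0.97, 0.88, 0.25], "saturated"),
]
model = train(training)
print(predict(model, [0.92, 0.85, 0.28]))  # -> "saturated"
```

The point of the sketch is that no application-level measurement appears anywhere: the saturation label is inferred purely from platform metrics.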
SuanMing: Predicting performance degradation using tracing.
We introduce SuanMing to mitigate performance issues before they impact the user experience. This contribution is applied in scenarios where tracing tools enable application-level monitoring. SuanMing predicts explainable causes of expected performance degradations and prevents performance degradations before they occur. Evaluation results show that SuanMing can predict and pinpoint future performance degradations with an accuracy of over 90%.
SARDE: Continuous and autonomous estimation of resource demands.
We present SARDE to learn application models for highly variable application deployments. This contribution focuses on the continuous estimation of application resource demands, a key parameter of performance models. SARDE represents an autonomous ensemble estimation technique. It dynamically and continuously optimizes, selects, and executes an ensemble of approaches to estimate resource demands in response to changes in the application or its environment. Through continuous online adaptation, SARDE efficiently achieves an average resource demand estimation error of 15.96% in our evaluation.
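To illustrate the underlying idea, the sketch below combines two textbook resource demand estimators (the Service Demand Law and a response-time-based approximation, not SARDE's actual estimator set) and selects whichever currently has the lowest error on validation samples:

```python
# Hedged sketch of ensemble-based resource demand estimation:
# each estimator derives a per-request demand from monitoring data,
# and the ensemble keeps selecting the currently best one.

def demand_utilization_law(util, throughput, **_):
    # Service Demand Law: D = U / X
    return util / throughput

def demand_response_time(resp_time, queue_len, **_):
    # Rough approximation: demand ~ response time / (1 + queue length)
    return resp_time / (1 + queue_len)

ESTIMATORS = {
    "util_law": demand_utilization_law,
    "resp_time": demand_response_time,
}

def select_estimator(samples, true_demand):
    """Pick the estimator with the lowest mean absolute error on samples."""
    def mae(fn):
        errs = [abs(fn(**s) - true_demand) for s in samples]
        return sum(errs) / len(errs)
    return min(ESTIMATORS, key=lambda name: mae(ESTIMATORS[name]))

# Hypothetical monitoring samples with a known demand of 0.02 s
samples = [
    {"util": 0.5, "throughput": 25.0, "resp_time": 0.10, "queue_len": 2},
    {"util": 0.6, "throughput": 30.0, "resp_time": 0.12, "queue_len": 3},
]
print(select_estimator(samples, true_demand=0.02))  # -> "util_law"
```

In the actual approach this selection runs continuously, so a change in the application or its environment can shift the choice to a different estimator.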
DepIC: Learning parametric dependencies from monitoring data.
DepIC utilizes feature selection techniques in combination with an ensemble regression approach to automatically identify and characterize parametric dependencies. Although parametric dependencies can massively improve the accuracy of performance models, DepIC is the first approach to automatically learn such parametric dependencies from passive monitoring data streams. Our evaluation shows that DepIC achieves 91.7% precision in identifying dependencies and reduces the characterization prediction error by 30% compared to the best individual approach.
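A minimal sketch of the identification step, assuming a simple correlation-based filter in place of DepIC's actual feature selection and ensemble regression; the parameter names and data are hypothetical:

```python
# Hedged sketch: flag service parameters whose values in a passive
# monitoring stream correlate strongly with an observed metric.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def identify_dependencies(params, target, threshold=0.8):
    """Return parameter names whose |correlation| with the target exceeds threshold."""
    return [name for name, values in params.items()
            if abs(pearson(values, target)) >= threshold]

# Hypothetical monitored parameters vs. a response-time metric
params = {
    "payload_size": [1, 2, 3, 4, 5],
    "user_id":      [7, 1, 5, 2, 9],
}
response_time = [2.1, 4.0, 6.2, 7.9, 10.1]
print(identify_dependencies(params, response_time))  # -> ['payload_size']
```

Characterizing the dependency (i.e., learning the functional form) would then follow as a separate regression step.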
Baloo: Modeling the configuration space of databases.
To study the impact of different configurations within distributed DBMSs, we introduce Baloo. Our last contribution models the configuration space of databases considering measurement variabilities in the cloud. More specifically, Baloo dynamically estimates the required benchmarking measurements and automatically builds a configuration space model of a given DBMS. Our evaluation of Baloo on a dataset consisting of 900 configuration points shows that the framework achieves a prediction error of less than 11% while saving up to 80% of the measurement effort.
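The idea of predicting unmeasured configuration points from a few benchmarked ones can be sketched with a simple k-nearest-neighbors predictor (Baloo's actual models are more elaborate, and configuration dimensions would need normalization in practice):

```python
# Hedged sketch: interpolate performance of an unmeasured DBMS
# configuration from the k nearest benchmarked configurations.

def knn_predict(measured, config, k=2):
    """Predict the performance of `config` as the mean over the k
    nearest measured configurations (Euclidean distance; in practice
    the configuration dimensions should be normalized first)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(measured, key=lambda item: dist(item[0], config))[:k]
    return sum(perf for _, perf in nearest) / k

# Hypothetical measurements: ((cores, buffer_mb), throughput_ops)
measured = [
    ((1, 64), 100.0),
    ((2, 64), 180.0),
    ((4, 256), 300.0),
]
print(knn_predict(measured, (2, 80)))  # -> 140.0, mean of the two nearest
```

The framework's contribution lies in deciding how many such measurement points are actually needed, given cloud measurement variability.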
Although the contributions themselves are orthogonal, taken together they provide a holistic approach to performance management of modern cloud-native microservice applications.
Our contributions are a significant step forward as they specifically target novel and cloud-native software development and operation paradigms, surpassing the capabilities and limitations of previous approaches.
In addition, the research presented in this thesis also has a significant impact on industry, as the contributions were developed in collaboration with research teams from Nokia Bell Labs, Huawei, and Google.
Overall, our solutions open up new possibilities for managing and optimizing cloud applications and improve cost and energy efficiency.
Aesthetic restorative procedures are now routine in most dental practices. Objective: This study investigated the extent to which composites can be cured beneath ceramic restorations. Materials and methods: Using 426 specimens made with Tetric Ceram® and 102 made with Variolink®, the following factors were considered: curing unit and exposure time, thickness and shade of the ceramic, curing mode of the composite (in this case, light-curing versus dual-curing), and, for the dual-curing composite, the influence of a short waiting period between mixing the paste and light polymerization. Discussion: To ensure adequate curing and good bond strength of light-curing luting composites beneath ceramic restorations, unobstructed exposure to a polymerization light for at least 25 seconds is required, preferably from six directions, especially when the ceramic is darkly shaded. For ceramic layers with a wall thickness of ≥ 2 mm, dual-curing composites and longer exposure times are recommended.
Today’s cloud data centers consume an enormous amount of energy, and their consumption will continue to rise. An estimate from 2012 found that data centers draw about 30 billion watts of power, corresponding to about 263 TWh of energy per year. Energy consumption is projected to rise to 1929 TWh by 2030. This projected rise in energy demand is fueled by a growing number of services deployed in the cloud: 50% of enterprise workloads have been migrated to the cloud over the last decade. Additionally, an increasing number of devices use the cloud to provide functionality, driving further data center growth. Estimates say more than 75 billion IoT devices will be in use by 2025.
The growing energy demand also increases CO2 emissions. Assuming a CO2 intensity of 200 g CO2 per kWh, this amounts to roughly 227 million tons of CO2, which is more than the emissions of all energy-producing power plants in Germany in 2020.
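The unit conversions behind these figures are straightforward; the sketch below assumes continuous operation (8760 hours per year) and applies them to the 2012 estimate quoted above:

```python
# Unit conversions for the energy and emission figures in the text.

def annual_twh(avg_power_watts):
    """Continuous power draw in watts -> energy per year in TWh."""
    return avg_power_watts * 8760 / 1e12  # Wh -> TWh

def co2_megatonnes(energy_twh, intensity_g_per_kwh):
    """Energy in TWh and intensity in g CO2/kWh -> emissions in megatonnes."""
    return energy_twh * 1e9 * intensity_g_per_kwh / 1e12  # g -> Mt

print(annual_twh(30e9))            # ~262.8 TWh: the 2012 estimate above
print(co2_megatonnes(262.8, 200))  # ~52.6 Mt CO2 for that 2012 consumption
```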
Data centers consume energy because they respond to service requests that are fulfilled using computing resources. Hence, it is not the users and devices that consume the energy in the data center but the software that controls the hardware. While the hardware physically consumes the energy, it is not always responsible for wasting it; the software itself plays a vital role in reducing the energy consumption and CO2 emissions of data centers. This thesis therefore focuses on software development.
To this end, we must first show developers that software contributes to energy consumption by providing evidence of its influence. The second step is to provide methods for assessing an application’s power consumption during the different phases of the development process while supporting modern DevOps and agile development methods. We therefore need automatic selection of system-level energy-consumption models that can accommodate rapid changes in the source code, as well as application-level models that allow developers to locate power-consuming software parts for continuous improvement. Finally, we need emulation to assess energy efficiency before the actual deployment.
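As a minimal example of a system-level power model of the kind referred to above, the widely used linear utilization model estimates power from CPU utilization alone; the coefficients below are hypothetical and would be fitted per machine:

```python
# Hedged sketch: the classic linear power model, a common baseline
# among system-level energy-consumption models.

def power_watts(cpu_util, p_idle=100.0, p_max=250.0):
    """Idle power plus utilization-proportional dynamic power.
    p_idle and p_max are illustrative, machine-specific constants."""
    return p_idle + (p_max - p_idle) * cpu_util

print(power_watts(0.0))  # 100.0 W at idle
print(power_watts(1.0))  # 250.0 W at full load
print(power_watts(0.5))  # 175.0 W at half load
```

Automatic model selection, as described above, would choose among such candidate models (linear, polynomial, multi-resource, ...) whenever the code or deployment changes.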
The present study examined several hypotheses regarding the occupational and health-related burden of parents of children with ADHD. First, we investigated the extent to which having a child with ADHD in the family increases the parents' burden at work and thereby impairs their health. Second, we examined the effects of the parents' own possible childhood ADHD symptoms, as measured by the WURS, on their health and their performance at work. Finally, the third hypothesis addressed whether the number of children with ADHD within a family has a measurable effect.
Accordingly, a comparative study was conducted with a clinical sample (n=91) and a healthy comparison sample (n=198). To make the various influencing factors verifiable, several assessment instruments in the form of questionnaires were distributed to both the clinical sample and the comparison sample (families whose children were described as healthy). For a general assessment of behavioral problems of the children in the respective families, the parents completed the Child Behavior Checklist. In addition, the parents rated their children's ADHD symptoms using the external-rating questionnaire for hyperkinetic disorders (Fremdbeurteilungsbogen für hyperkinetische Störungen). Furthermore, the parents assessed their own possible childhood ADHD symptoms using the retrospective Wender Utah Rating Scale. The individual health status of the fathers and mothers was assessed with the EQ-5D, while limitations at work were measured with the Work Limitations Questionnaire. Finally, all participating parents completed a socioeconomic questionnaire covering age, sex, marital status, educational attainment, and net household income.
Numerous studies, already mentioned in the discussion section, have found an increased burden on these parents. The present study additionally examined the concrete effects of this established burden on health status and the occupational environment. The impact on the everyday lives of affected parents has so far received little scientific attention. However, in order to support affected families more effectively in different areas of life, it is essential to know and better understand these effects.
Regarding hypothesis I, the results showed that the presence of a child with ADHD in a family significantly affects the parents' self-rated health status on the EQ-5D. With respect to occupational burden, a child with ADHD significantly affected the parents' physical constitution according to the WLQ. The examination of hypothesis II revealed that possible childhood ADHD symptoms of the parents affected several dimensions of the occupational environment, but did not significantly affect individual health status. Fathers and mothers who reported ADHD symptoms in their own childhood reported significant impairments in mental abilities, time management, and overall work productivity in the self-rated WLQ. A physical limitation at work according to the WLQ was significant for fathers but not for mothers. The results for hypothesis III showed that, already with one or more children with ADHD, the parents' cognitive abilities at work are impaired according to the WLQ. Likewise, work productivity is significantly affected with one or more affected children. However, one or more children with ADHD had no significant influence on the parents' self-rated physical constitution in the WLQ. The parents' time management at work according to the WLQ is not yet significantly impaired with one affected child, but it is when more than one child is affected. Similarly, the parents' health status according to the EQ-5D is only affected once two or more children in a family are concerned.
In summary, the presence of a child with ADHD in a family primarily affects the parents' health status, whereas the parents' own childhood ADHD symptoms lead to a significant and multidimensional impairment at work. This finding shows that the parents' own childhood ADHD symptoms, in addition to the presence of a child with ADHD, have considerable effects on the everyday tasks of those affected. These newly identified relationships should be taken into account in future research.
Chronic stress has negative consequences that can manifest in behavior and at the neuronal level. The neurons of the third region of the hippocampal cornu ammonis, CA3, are considered particularly stress-sensitive. Even in the mature state, they respond very sensitively to external influences, a property referred to as neuronal plasticity. Among other factors, stress and serotonin induce morphological and functional changes in these neurons. Serotonin transporters maintain serotonin homeostasis by ultimately terminating its action through reuptake into the cells. Polymorphisms, i.e., different gene variants, cause differences in the number of available transporters. This interplay between serotonin transporter gene variants and stress was investigated in serotonin transporter knockout mice. Some mice experienced stress early in life, which either persisted or gave way to positive experiences later in life; other mice had positive early-life experiences that either continued or were later replaced by stressful ones. After behavioral testing, the morphology of the apical dendrites of CA3 short-shaft pyramidal cells was examined by light microscopy in Golgi-impregnated brains and reconstructed in 3D computer models. Owing to regional differences within CA3, these neurons were assigned to different subpopulations. Indeed, combining four different life histories with three serotonin transporter genotypes revealed differences in CA3 pyramidal cell morphology between the groups. Without stress experience, the neurons were mostly significantly more branched; after stress experience, significant reductions in spines were observed, at least in one particular subpopulation.
Mice with two or one wild-type serotonin transporter allele(s) and exclusively late aversive experiences had significantly longer apical dendrites than the reference group with two wild-type alleles and no stress experience; homozygous serotonin transporter-deficient mice with the same life history had significantly shortened apical dendrites compared with the reference. These results suggest that stress combined with genetically low levels of the serotonin transporter may indeed increase vulnerability to mental illness, but that exclusively late stress experiences may also act protectively at higher serotonin transporter levels.
That grammars stand in manifold relationships to one another and, through engagement with earlier works of language description, form influential lines of tradition has been a commonplace gladly taken up by the authors of such texts for centuries. While interrelations at the theoretical level (for instance, regarding the evolution of different grammar models) are regularly the subject of meta-grammaticographical research in German studies, the broad field of the linguistic examples cited in these texts, and of the associated evaluative vocabulary, has rarely been used to probe the depth of the entanglements between individual grammar texts and to determine the material shape of the corresponding traditions. Even more rarely have impulses from the field of German as a foreign language come into focus.
Against this background, the present study, which is founded on an extensive corpus of relevant German- and English-language grammars of the German language, pursues a twofold aim. On the one hand, in the spirit of basic research in variational linguistics, it seeks to outline the treatment of variants in German grammaticography between 1958 and 2015 by means of computer-assisted qualitative data analysis. On the other hand, it attempts to reconstruct intertextual reference structures on the basis of this material in order to gain deeper insight into the historical genesis of what might be called the German grammatical codex. Not least, the findings indicate that individual English-language publications participate strongly in the grammaticographical discourse and that the closely woven network of texts profited from distinct impulses from the foreign-language domain, especially at the beginning of the period under investigation.
FSME (tick-borne encephalitis) vaccination has reduced the number of cases in endemic areas. However, systematic investigations are lacking for situations that require special consideration. This review summarizes the available data on the immunogenicity and safety of FSME vaccination in patients with an altered immune system.
The review followed the PRISMA guidelines. The literature search was conducted in June 2017 (updated September 2019) in the Medline/PubMed database. The results were summarized for the specific populations and the above-mentioned endpoints: elderly people, thymectomized children, pregnant women, transplant recipients, patients with autoimmune diseases, and patients with congenital and acquired immunodeficiencies. For elderly people, 36 studies were included; these could be summarized quantitatively and qualitatively. For the other populations, only a few studies were found, most of which did not permit general conclusions. Pregnant women: no studies.
The review's potential for bias exists at the level of the individual studies, the outcome criteria, and the review itself. This review was not registered. There was no financial support.
The importance of proactive and timely prediction of critical events is steadily increasing, whether in the manufacturing industry or in private life. In the past, machines in the manufacturing industry were often maintained on a fixed schedule or after threshold violations, which is no longer competitive as it causes unnecessary costs and downtime. In contrast, predictions of critical events in everyday life are often far more concealed and hardly noticeable to private individuals until a critical event actually occurs. For instance, our electricity provider has to ensure that we, as end users, are always supplied with sufficient electricity, and our favorite streaming service has to guarantee that we can watch our favorite series without interruptions. For this purpose, they must constantly analyze the current situation, how it will develop in the near future, and how to react in order to cope with future conditions without causing power outages or video stalling.
In order to analyze the performance of a system, monitoring mechanisms are often integrated to observe characteristics that describe the workload and the state of the system and its environment. Reactive systems typically employ thresholds, utility functions, or models to determine the current state of the system. However, such reactive systems cannot anticipate future events; they can only detect them as they occur. In the case of critical events, reactively determining the current system state is futile, whereas a proactive system could have predicted the event in advance and enabled timely countermeasures. To achieve proactivity, the system requires estimates of future system states. Given the gap between design time and runtime, it is typically not possible to use expert knowledge to model a priori all situations a system might encounter at runtime. Therefore, prediction methods must be integrated into the system. Depending on the available monitoring data and the complexity of the prediction task, either time series forecasting in combination with thresholding or more sophisticated machine and deep learning models have to be trained.
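The difference can be sketched with a toy example: a reactive check fires only once a threshold is crossed, while a proactive check extrapolates the recent trend (here, naive linear extrapolation as a stand-in for real forecasting):

```python
# Hedged sketch: reactive vs. proactive threshold checking on a
# monitored utilization series. Values are hypothetical.

def reactive_alarm(series, threshold):
    """Reactive: alarm only once the latest observation crosses the threshold."""
    return series[-1] >= threshold

def proactive_alarm(series, threshold, horizon=3):
    """Proactive: extrapolate the recent linear trend and alarm if the
    forecast crosses the threshold within the horizon."""
    slope = series[-1] - series[-2]
    forecast = series[-1] + slope * horizon
    return forecast >= threshold

load = [0.50, 0.58, 0.66, 0.74]  # steadily rising utilization
print(reactive_alarm(load, 0.9))   # False: threshold not yet crossed
print(proactive_alarm(load, 0.9))  # True: trend crosses 0.9 within 3 steps
```

The proactive variant buys the lead time needed for countermeasures, at the cost of depending on forecast quality.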
Although numerous forecasting methods have been proposed in the literature, each has advantages and disadvantages depending on the characteristics of the time series under consideration. Therefore, expert knowledge is required to decide which forecasting method to choose. However, since the time series observed at runtime cannot be known at design time, such expert knowledge cannot be built into the system. In addition to selecting an appropriate forecasting method, several time series preprocessing steps are required to achieve satisfactory forecasting accuracy. In the literature, this preprocessing is often done manually, which is not practical for autonomous computing systems such as Self-Aware Computing Systems. Several approaches have also been presented in the literature for predicting critical events based on multivariate monitoring data using machine and deep learning. However, these approaches are typically highly domain-specific, targeting, for example, financial failures, bearing failures, or product failures, and therefore require in-depth expert knowledge. For this reason, these approaches cannot be fully automated and are not transferable to other use cases. Thus, the literature lacks generalizable end-to-end workflows for modeling, detecting, and predicting failures that require little expert knowledge.
To overcome these shortcomings, this thesis presents a system model for meta-self-aware prediction of critical events based on the LRA-M loop of Self-Aware Computing Systems. Building upon this system model, this thesis provides six further contributions to critical event prediction. While the first two contributions address critical event prediction based on univariate data via time series forecasting, the three subsequent contributions address critical event prediction for multivariate monitoring data using machine and deep learning algorithms. Finally, the last contribution addresses the update procedure of the system model. Specifically, the seven main contributions of this thesis can be summarized as follows:
First, we present a system model for meta self-aware prediction of critical events. To handle both univariate and multivariate monitoring data, it offers univariate time series forecasting for use cases where a single observed variable is representative of the state of the system, and machine learning algorithms combined with various preprocessing techniques for use cases where a large number of variables are observed to characterize the system’s state. However, the two different modeling alternatives are not disjoint, as univariate time series forecasts can also be included to estimate future monitoring data as additional input to the machine learning models. Finally, a feedback loop is incorporated to monitor the achieved prediction quality and trigger model updates.
We propose a novel hybrid time series forecasting method for univariate, seasonal time series, called Telescope. To this end, Telescope automatically preprocesses the time series, performs a kind of divide-and-conquer technique to split the time series into multiple components, and derives additional categorical information. It then forecasts the components and categorical information separately using a specific state-of-the-art method for each component. Finally, Telescope recombines the individual predictions. As Telescope performs both preprocessing and forecasting automatically, it represents a complete end-to-end approach to univariate seasonal time series forecasting. Experimental results show that Telescope achieves enhanced forecast accuracy, more reliable forecasts, and a substantial speedup. Furthermore, we apply Telescope to the scenario of predicting critical events for virtual machine auto-scaling. Here, results show that Telescope considerably reduces the average response time and significantly reduces the number of service level objective violations.
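The decompose-forecast-recombine principle can be sketched as follows; this toy version uses a simple seasonal-mean decomposition and a naive remainder forecast, not Telescope's actual component methods:

```python
# Hedged sketch of divide-and-conquer forecasting for seasonal series:
# split into components, forecast each, recombine.

def decompose(series, period):
    """Split a seasonal series into a repeating seasonal profile and a remainder."""
    seasonal = [sum(series[i::period]) / len(series[i::period])
                for i in range(period)]
    remainder = [x - seasonal[i % period] for i, x in enumerate(series)]
    return seasonal, remainder

def forecast(series, period, steps):
    """Forecast each component separately, then recombine."""
    seasonal, remainder = decompose(series, period)
    level = sum(remainder) / len(remainder)  # naive forecast of the remainder
    n = len(series)
    return [level + seasonal[(n + h) % period] for h in range(steps)]

# Perfectly seasonal toy series with period 4
history = [10, 20, 30, 40, 10, 20, 30, 40]
print(forecast(history, period=4, steps=4))  # -> [10.0, 20.0, 30.0, 40.0]
```

The actual method additionally detects the period automatically, derives categorical information, and uses a dedicated state-of-the-art forecaster per component.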
For the automatic selection of a suitable forecasting method, we introduce two frameworks for recommending forecasting methods. The first framework extracts various time series characteristics to learn the relationship between them and forecast accuracy. In contrast, the other framework divides the historical observations into internal training and validation parts to estimate the most appropriate forecasting method. Moreover, this framework also includes time series preprocessing steps. Comparisons between the proposed forecasting method recommendation frameworks and the individual state-of-the-art forecasting methods and the state-of-the-art forecasting method recommendation approach show that the proposed frameworks considerably improve the forecast accuracy.
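The second framework's internal train/validation idea can be sketched with two naive candidate methods standing in for real forecasters:

```python
# Hedged sketch: recommend a forecasting method by scoring candidates
# on an internal holdout of the available history.

def naive_last(train, steps):
    """Repeat the last observation."""
    return [train[-1]] * steps

def naive_mean(train, steps):
    """Repeat the historical mean."""
    return [sum(train) / len(train)] * steps

METHODS = {"last": naive_last, "mean": naive_mean}

def recommend(history, holdout=3):
    """Split history into internal train/validation parts and return the
    candidate method with the lowest validation MAE."""
    train, valid = history[:-holdout], history[-holdout:]
    def mae(fn):
        preds = fn(train, len(valid))
        return sum(abs(p - v) for p, v in zip(preds, valid)) / len(valid)
    return min(METHODS, key=lambda name: mae(METHODS[name]))

print(recommend([1, 2, 3, 10, 10, 10]))  # -> "last"
print(recommend([5, 1, 5, 4, 3, 4]))     # -> "mean"
```

The characteristics-based framework would instead extract time series features and learn a mapping from features to the best method, avoiding the per-series holdout evaluation.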
With regard to multivariate monitoring data, we first present an end-to-end workflow to detect critical events in technical systems in the form of anomalous machine states. The end-to-end design includes raw data processing, phase segmentation, data resampling, feature extraction, and machine tool anomaly detection. In addition, the workflow does not rely on profound domain knowledge or specific monitoring variables, but merely assumes standard machine monitoring data. We evaluate the end-to-end workflow using data from a real CNC machine. The results indicate that conventional frequency analysis does not detect the critical machine conditions well, whereas our workflow detects them reliably with an F1-score of almost 91%.
To predict critical events rather than merely detecting them, we compare different modeling alternatives for critical event prediction in the use case of time-to-failure prediction of hard disk drives. Given that failure records are typically significantly less frequent than instances representing the normal state, we employ different oversampling strategies. Next, we compare the prediction quality of binary class modeling with downscaled multi-class modeling. Furthermore, we integrate univariate time series forecasting into the feature generation process to estimate future monitoring data. Finally, we model the time-to-failure using not only classification models but also regression models. The results suggest that multi-class modeling provides the overall best prediction quality with respect to practical requirements. In addition, we show that forecasting the features of the prediction model significantly improves the critical event prediction quality.
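The downscaled multi-class formulation maps the continuous time-to-failure onto a small set of classes, and oversampling rebalances the rare failure records; both steps are sketched below with illustrative bin bounds and a deliberately simple duplication-based oversampler:

```python
# Hedged sketch of two preprocessing steps for time-to-failure
# prediction: class mapping and class rebalancing.

def ttf_class(hours_to_failure, bins=(24, 72, 168)):
    """Map a continuous time-to-failure to a discrete class:
    0 = fails within a day, 1 = within three days, 2 = within a week,
    3 = no failure expected within a week. Bin bounds are illustrative."""
    for cls, bound in enumerate(bins):
        if hours_to_failure <= bound:
            return cls
    return len(bins)

def oversample(samples, minority_label):
    """Duplicate minority-class samples until the classes are balanced;
    a deliberately simple stand-in for techniques such as SMOTE."""
    minority = [s for s in samples if s[1] == minority_label]
    majority = [s for s in samples if s[1] != minority_label]
    out = list(samples)
    i = 0
    while sum(1 for s in out if s[1] == minority_label) < len(majority):
        out.append(minority[i % len(minority)])
        i += 1
    return out

print(ttf_class(10))    # 0: failure expected within a day
print(ttf_class(100))   # 2: failure expected within a week
```

A classifier trained on such classes answers the practically relevant question ("how soon?") without the full difficulty of exact regression.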
We propose an end-to-end workflow for predicting critical events of industrial machines. Again, this approach does not rely on expert knowledge except for the definition of monitoring data, and therefore represents a generalizable workflow for predicting critical events of industrial machines. The workflow includes feature extraction, feature handling, target class mapping, and model learning with integrated hyperparameter tuning via a grid-search technique. Drawing on the result of the previous contribution, the workflow models the time-to-failure prediction in terms of multiple classes, where we compare different labeling strategies for multi-class classification. The evaluation using real-world production data of an industrial press demonstrates that the workflow is capable of predicting six different time-to-failure windows with a macro F1-score of 90%. When scaling the time-to-failure classes down to a binary prediction of critical events, the F1-score increases to above 98%.
Finally, we present four update triggers to assess when critical event prediction models should be re-trained during online application. Such re-training is required, for instance, due to concept drift. The update triggers introduced in this thesis take into account the elapsed time since the last update, the prediction quality achieved on the current test data, and the prediction quality achieved on the preceding test data. We compare the different update strategies with each other and with the static baseline model. The results demonstrate the necessity of model updates during online application and suggest that the update triggers that consider both the prediction quality of the current and preceding test data achieve the best trade-off between prediction quality and number of updates required.
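A combined update trigger of the kind described can be sketched as follows; the thresholds are illustrative, not the ones evaluated in the thesis:

```python
# Hedged sketch: decide whether to re-train a prediction model based
# on model age and on current vs. preceding prediction quality.

def should_update(hours_since_update, f1_current, f1_previous,
                  max_age=168, min_f1=0.85, max_drop=0.05):
    """Re-train if the model is too old, if current quality is too low,
    or if quality dropped sharply against the preceding test window.
    All thresholds are illustrative."""
    if hours_since_update >= max_age:
        return True          # time-based trigger
    if f1_current < min_f1:
        return True          # absolute quality trigger
    if f1_previous - f1_current > max_drop:
        return True          # relative drift trigger
    return False

print(should_update(10, 0.90, 0.91))   # False: fresh model, stable quality
print(should_update(10, 0.90, 0.97))   # True: sharp quality drop
```

Combining the current and preceding test windows, as above, corresponds to the trigger family that achieved the best trade-off in the evaluation.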
We are convinced that the contributions of this thesis provide significant impetus for the academic research community as well as for practitioners. First of all, to the best of our knowledge, we are the first to propose a fully automated, end-to-end, hybrid, component-based forecasting method for seasonal time series that also includes time series preprocessing. Due to the combination of reliably high forecast accuracy and reliably low time-to-result, it offers many new opportunities in applications requiring accurate forecasts within a fixed time period in order to take timely countermeasures. In addition, the promising results of the forecasting method recommendation systems provide new opportunities to enhance forecasting performance for all types of time series, not just seasonal ones. Furthermore, we are the first to expose the deficiencies of the prior state-of-the-art forecasting method recommendation system.
Concerning the contributions to critical event prediction based on multivariate monitoring data, we have already collaborated closely with industrial partners, which supports the practical relevance of the contributions of this thesis. The automated end-to-end design of the proposed workflows that do not demand profound domain or expert knowledge represents a milestone in bridging the gap between academic theory and industrial application. Finally, the workflow for predicting critical events in industrial machines is currently being operationalized in a real production system, underscoring the practical impact of this thesis.
Although broad-scale climatic gradients, like the latitudinal gradient in diversity, are among the best-described patterns in ecology, the mechanisms driving biodiversity along them remain poorly understood. Because of their high biodiversity, restricted spatial ranges, the continuous change in abiotic factors with altitude, and their worldwide occurrence, mountains constitute ideal study systems to elucidate the predictors of global biodiversity patterns. However, mountain ecosystems are increasingly threatened by human land use and climate change. Since the consequences of such alterations for mountainous biodiversity and related ecosystem services are hardly known, research along elevational gradients is also of utmost importance from a conservation point of view. In addition to classical biodiversity research focusing on taxonomy, the significance of studying functional traits and their prominence in biodiversity-ecosystem functioning (BEF) relationships is increasingly acknowledged. In this dissertation, I explore the patterns and drivers of mammal and dung beetle diversity along elevational and land use gradients on Mt. Kilimanjaro, Tanzania. Furthermore, I investigate the predictors of dung decomposition by dung beetles under different extinction scenarios.
Mammals are not only charismatic; they also fulfil important roles in ecosystems. By turning over large amounts of biomass, they provide important ecosystem services such as seed dispersal and nutrient cycling. In chapter II, I show that mammal diversity and community biomass both exhibited a unimodal distribution with elevation on Mt. Kilimanjaro and were mainly impacted by primary productivity, a measure of total food abundance, and the protection status of the study plots. Due to their large size and endothermy, mammals, in contrast to most arthropods, are theoretically predicted to be limited by food availability. My results are in concordance with this prediction. The significantly higher diversity and biomass in the Kilimanjaro National Park and in other conservation areas underscore that habitat protection is vital for the conservation of large mammal biodiversity on tropical mountains.
Dung beetles depend on mammals, relying on mammalian dung as a food and nesting resource. Dung beetles are also important ecosystem service providers: they play an important role in nutrient cycling, bioturbation, secondary seed dispersal and parasite suppression. In chapter III, I show that dung beetle diversity declined with elevation, while dung beetle abundance followed a hump-shaped pattern along the elevational gradient. In contrast to mammals, dung beetle diversity was primarily predicted by temperature. Despite my attempt to accurately quantify mammalian dung resources by calculating mammalian defecation rates, I did not find an influence of dung resource availability on dung beetle richness. Instead, higher temperature translated into higher dung beetle diversity.
Apart from being important ecosystem service providers, dung beetles are also model organisms for BEF studies since they rely on a resource which can be quantified easily. In chapter IV, I explore dung decomposition by dung beetles along the elevational gradient by means of an exclosure experiment in the presence of the whole dung beetle community, in the absence of large dung beetles, and without any dung beetles. I show that dung decomposition was highest when the dung could be decomposed by the whole dung beetle community, significantly reduced in the sole presence of small dung beetles, and lowest in the absence of dung beetles. Furthermore, I demonstrate that the drivers of dung decomposition depended on the intactness of the dung beetle community. While body size was the most important driver in the presence of the whole dung beetle community, species richness gained in importance when large dung beetles were excluded. In the most perturbed state of the system, with no dung beetles present, temperature was the sole driver of dung decomposition. In conclusion, abiotic drivers become more important predictors of ecosystem services the more the study system is disturbed.
In this dissertation, I exemplify that the drivers of diversity along broad-scale climatic gradients on Mt. Kilimanjaro depend on the thermoregulatory strategy of organisms. While mammal diversity was mainly impacted by food and energy resources, dung beetle diversity was mainly limited by temperature. I also demonstrate the importance of protected areas for the preservation of large mammal biodiversity. Furthermore, I show that large dung beetles were disproportionately important for dung decomposition, as dung decomposition significantly decreased when large dung beetles were excluded. As regards land use, I did not detect an overall effect on dung beetle or mammal diversity, nor on dung beetle-mediated dung decomposition. However, for the most specialised mammal trophic guilds and dung beetle functional groups, negative land use effects were already visible. Even though the current moderate levels of land use on Mt. Kilimanjaro can sustain high levels of biodiversity, the pressure of the human population on Mt. Kilimanjaro is increasing, and further land use intensification poses a great threat to biodiversity. In synergy with land use, climate change is jeopardizing current patterns and levels of biodiversity with the potential to displace communities, which may have unpredictable consequences for ecosystem service provisioning in the future.
Anthropogenic activities are causing air pollution. Amongst air pollutants, tropospheric ozone is a major threat to human health and ecosystem functioning. In this dissertation, I present three studies that aimed at increasing our knowledge of how ozone exposure affects plant reproduction and plant interactions with insect herbivores and pollinators.
For this purpose, a new fumigation system was built and placed in a greenhouse. The annual plant Sinapis arvensis (wild mustard) was used as the model plant.
Plants were exposed to either 0 ppb (control) or 120 ppb of ozone, for variable amounts of time and at different points of their life cycle. After fumigation, plants were exposed to herbivores or pollinators in the greenhouse, or to both groups of insects in the field.
My research shows that ozone affected reproductive performance differently depending on the timing of exposure: plants exposed at earlier ages showed increased reproductive fitness, while plants exposed later in their life cycle tended toward reduced reproductive fitness. Plant phenology was a key factor influencing reproductive fitness: ozone accelerated flowering and increased the number of flowers produced by plants exposed at early ages, while plants exposed to ozone at later ages tended to have fewer flowers. By contrast, the ozone-mediated changes in plant-insect interactions had little impact on plant reproductive success.
The strongest effect of ozone on plant-pollinator interactions was the change in the number of flower visits received per plant, which was strongly linked to the number of open flowers. As a rule, exposure of plants to ozone early in the life cycle resulted in more pollinator visits, while exposure later in the life cycle resulted in fewer flower visits by potential pollinators. One exception was observed: large syrphid flies visited young ozone-exposed plants more often than the respective control plants, beyond what the increase in the number of open flowers in those plants would explain. In addition, honeybees spent more time per flower on plants exposed to ozone than on control plants, while other pollinators spent similar amounts of time on control and ozone-exposed plants. This guild-dependent preference for ozone-exposed plants may be due to species-specific preferences related to changes in the quality and quantity of floral rewards.
In the field, ozone-exposed plants showed only a tendency for increased colonization by sucking herbivores and slightly more damage by chewing herbivores than control plants. In the greenhouse experiment, by contrast, Pieris brassicae butterflies preferred control plants over ozone-exposed plants as oviposition sites. Eggs laid on ozone-exposed plants took longer to hatch, but their chances of survival were higher. Caterpillars performed better on control plants than on ozone-exposed plants, particularly when the temperature was high.
Most of the described effects were dependent on the duration and timing of the ozone exposure and the observed temperature, with the strongest effects being observed for longer exposures and higher temperatures. Furthermore, the timing of exposure altered the direction of the effects.
The expected climate change provides ideal conditions for further increases in tropospheric ozone concentrations, and therefore for stronger effects on plants and plant-insect interactions. Acceleration of flowering caused by plant exposure to ozone may put plant-pollinator interactions at risk by promoting desynchronization between plant and pollinator activities. Reduced performance of caterpillars feeding on ozone-exposed plants may weaken herbivore populations. On the other hand, the increased plant reproduction that results from exposing young plants to ozone could be good news for horticulture, if similar results can be achieved in high-value crops. However, the plant response to ozone is highly species-specific. In fact, Sinapis arvensis is considered a weed, and the advantage conferred by ozone exposure may increase its competitiveness, with negative consequences for crops or plant communities in general. Overall, plant exposure to ozone might constitute a threat to the balance of natural and agro-ecosystems.
This thesis investigates mathematical paper folding, in particular one-fold origami (1-fach-Origami), in a university context. The thesis consists of three parts.
The first part is essentially devoted to the subject-matter analysis of one-fold origami. In the first chapter, we address the historical context of one-fold origami, consider axiomatic foundations, and discuss how axiomatizing one-fold origami could contribute to an understanding of the concept of an axiom. In the second chapter, we outline the design of the associated exploratory study and describe our research goals and questions. In the third chapter, one-fold origami is mathematized, defined, and examined in depth.
The second part deals with the courses »Axiomatisieren lernen mit Papierfalten« (Learning to Axiomatize with Paper Folding) that we designed and taught. In the fourth chapter, we describe the teaching methodology and the design of the courses; the fifth chapter contains an excerpt from the courses.
The third part describes the associated tests. In the sixth chapter, we explain the design of the tests as well as the testing methodology. In the seventh chapter, these tests are evaluated.
The crime of commission by omission (unechtes Unterlassungsdelikt) has long been regarded as the "darkest chapter" in the doctrine of the General Part of the German Criminal Code (StGB). The only statutory basis for criminal liability is that the omitting person "is legally responsible for ensuring that the result does not occur" (§ 13 Abs. 1 StGB), i.e. is a guarantor. Among the traditionally discussed guarantor positions, the one arising from Ingerenz (prior dangerous conduct) is particularly contested.
Does someone who has created a danger to the legal interests of others hold a guarantor position with respect to this potentially harmful course of events, so that, pursuant to § 13 Abs. 1 StGB, he is punished for failing to avert the result as if he had brought it about by positive action? What legal requirements would, in that case, have to be imposed on the conduct establishing the guarantor position? The regularly discussed alternatives are whether only unlawful prior conduct gives rise to a guarantor position based on Ingerenz, or whether lawful ("qualifiedly risky") prior conduct also suffices.
This thesis concludes that the liability of the Ingerent can be justified on the basis of current law. With regard to the requirements for the guarantor position, it aims to show that the assessment of the conduct cannot depend on the uncertain ex ante decision-making perspective. Instead, a mediating solution is proposed that provides the basis of assessment with a maximum of objectivity.