Aesthetic restorative procedures have become routine in most dental practices. Objective: This study investigated the extent to which composites can be cured beneath ceramic restorations. Materials and methods: Based on 426 specimens made with Tetric Ceram® and 102 made with Variolink®, the following dependencies were considered: curing unit and exposure time, thickness and shade of the ceramic, curing mode of the composite (in this case whether a light-curing or a dual-curing composite was used) and, finally, for the dual-curing composite, the influence of a short waiting period between mixing the paste and light polymerization. Discussion: To ensure sufficient curing and good bond strength of light-curing luting composites beneath ceramic restorations, unobstructed exposure to a polymerization light for at least 25 seconds is required, preferably from six directions, especially when the ceramic is of a darker shade. When irradiating through ceramic layers with a wall thickness of ≥ 2 mm, dual-curing composites and longer exposure times are recommended.
Vaccination against tick-borne encephalitis (TBE) has reduced the number of disease cases in endemic areas. However, systematic investigations are lacking for situations in which special patient characteristics must be taken into account. This review summarizes the available data on the immunogenicity and safety of TBE vaccination in patients with altered immune systems.
The review followed the PRISMA guidelines. The literature search was conducted in the Medline/PubMed database in June 2017 (updated in September 2019). The results were summarized for the following special populations and the endpoints mentioned above: elderly people, thymectomized children, pregnant women, transplant recipients, patients with autoimmune diseases, and patients with congenital or acquired immunodeficiencies. For elderly people, 36 studies were included; these could be summarized both quantitatively and qualitatively. For the other populations, only a few studies were found, most of which did not permit generally valid conclusions. For pregnant women, no studies were found.
Potential for bias exists at the level of the individual studies, the endpoints, and the review itself. This review was not registered, and it received no financial support.
Chronic stress has negative consequences that can manifest themselves in behavior and at the neuronal level. The neurons of CA3, the third region of the hippocampal cornu ammonis, are considered particularly sensitive to stress. Even in their fully mature state, they still respond very sensitively to external influences, a property referred to as neuronal plasticity. Among other factors, stress and serotonin induce morphological and functional changes in these neurons. Serotonin transporters maintain serotonin homeostasis by ultimately terminating its action through reuptake into the cells. Polymorphisms, i.e. different gene variants, cause differences in the number of available transporters. This interplay between serotonin transporter gene variants and stress was investigated in serotonin transporter knockout mice. Some mice experienced stress early in life, which either persisted or gave way to positive experiences later in life; other mice, in contrast, had positive experiences early in life, which either continued or were later replaced by stressful experiences. After behavioral testing, the morphology of the apical dendrites of CA3 short-shaft pyramidal cells was examined by light microscopy in the Golgi-impregnated brains and reproduced in 3D computer models. Owing to regional peculiarities within CA3, these neurons were assigned to different subpopulations. Indeed, using the combination of four different life histories and three different serotonin transporter genotypes, differences in the morphology of the CA3 pyramidal cells could be detected between the individual groups. Without stress experience, the neurons were usually significantly more branched; after stress experience, significant reductions in spine numbers were observed, at least in one particular subpopulation.
Mice with two or one wild-type serotonin transporter allele(s) and exclusively late aversive experiences had significantly longer apical dendrites than the reference group with two wild-type alleles and no stress experience; homozygous serotonin transporter-deficient mice with the same life history had significantly shortened apical dendrites compared with the reference. These results suggest that stress combined with genetically low levels of the serotonin transporter may indeed confer increased vulnerability to mental illness, but that exclusively late stress experiences may also have a protective effect when serotonin transporter levels are higher.
Although the latitudinal gradient in diversity is among the best-described patterns in ecology, the mechanisms driving biodiversity along broad-scale climatic gradients remain poorly understood. Because of their high biodiversity, restricted spatial ranges, the continuous change in abiotic factors with altitude and their worldwide occurrence, mountains constitute ideal study systems to elucidate the predictors of global biodiversity patterns. However, mountain ecosystems are increasingly threatened by human land use and climate change. Since the consequences of such alterations for mountain biodiversity and related ecosystem services are hardly known, research along elevational gradients is also of utmost importance from a conservation point of view. In addition to classical biodiversity research focusing on taxonomy, the significance of studying functional traits and their prominence in biodiversity-ecosystem functioning (BEF) relationships is increasingly acknowledged. In this dissertation, I explore the patterns and drivers of mammal and dung beetle diversity along elevational and land use gradients on Mt. Kilimanjaro, Tanzania. Furthermore, I investigate the predictors of dung decomposition by dung beetles under different extinction scenarios.
Mammals are not only charismatic; they also fulfil important roles in ecosystems. They provide important ecosystem services such as seed dispersal and nutrient cycling by turning over high amounts of biomass. In chapter II, I show that mammal diversity and community biomass both exhibited a unimodal distribution with elevation on Mt. Kilimanjaro and were mainly impacted by primary productivity, a measure of the total food abundance, and the protection status of study plots. Due to their large size and endothermy, mammals, in contrast to most arthropods, are theoretically predicted to be limited by food availability. My results are in concordance with this prediction. The significantly higher diversity and biomass in the Kilimanjaro National Park and in other conservation areas underscore that habitat protection is vital for the conservation of large mammal biodiversity on tropical mountains.
Dung beetles are dependent on mammals since they rely upon mammalian dung as a food and nesting resource. Dung beetles are also important ecosystem service providers: they play an important role in nutrient cycling, bioturbation, secondary seed dispersal and parasite suppression. In chapter III, I show that dung beetle diversity declined with elevation while dung beetle abundance followed a hump-shaped pattern along the elevational gradient. In contrast to mammals, dung beetle diversity was primarily predicted by temperature. Despite my attempt to accurately quantify mammalian dung resources by calculating mammalian defecation rates, I did not find an influence of dung resource availability on dung beetle richness. Instead, higher temperature translated into higher dung beetle diversity.
Apart from being important ecosystem service providers, dung beetles are also model organisms for BEF studies since they rely on a resource which can be quantified easily. In chapter IV, I explore dung decomposition by dung beetles along the elevational gradient by means of an exclosure experiment in the presence of the whole dung beetle community, in the absence of large dung beetles, and without any dung beetles. I show that dung decomposition was highest when the dung could be decomposed by the whole dung beetle community, was significantly reduced in the sole presence of small dung beetles, and was lowest in the absence of dung beetles. Furthermore, I demonstrate that the drivers of dung decomposition depended on the intactness of the dung beetle community. While body size was the most important driver in the presence of the whole dung beetle community, species richness gained in importance when large dung beetles were excluded. In the most perturbed state of the system, with no dung beetles present, temperature was the sole driver of dung decomposition. In conclusion, abiotic drivers become more important predictors of ecosystem services the more the study system is disturbed.
In this dissertation, I exemplify that the drivers of diversity along broad-scale climatic gradients on Mt. Kilimanjaro depend on the thermoregulatory strategy of organisms. While mammal diversity was mainly impacted by food/energy resources, dung beetle diversity was mainly limited by temperature. I also demonstrate the importance of protected areas for the preservation of large mammal biodiversity. Furthermore, I show that large dung beetles were disproportionately important for dung decomposition, as dung decomposition significantly decreased when large dung beetles were excluded. As regards land use, I did not detect an overall effect either on dung beetle and mammal diversity or on dung beetle-mediated dung decomposition. However, for the most specialised mammal trophic guilds and dung beetle functional groups, negative land use effects were already visible. Even though the current moderate levels of land use on Mt. Kilimanjaro can sustain high levels of biodiversity, the pressure of the human population on Mt. Kilimanjaro is increasing and further land use intensification poses a great threat to biodiversity. In synergy with land use, climate change is jeopardizing current patterns and levels of biodiversity with the potential to displace communities, which may have unpredictable consequences for ecosystem service provisioning in the future.
The importance of proactive and timely prediction of critical events is steadily increasing, whether in the manufacturing industry or in private life. In the past, machines in the manufacturing industry were often maintained based on a regular schedule or threshold violations, which is no longer competitive as it causes unnecessary costs and downtime. In contrast, the predictions of critical events in everyday life are often much more concealed and hardly noticeable to the private individual, unless the critical event occurs. For instance, our electricity provider has to ensure that we, as end users, are always supplied with sufficient electricity, or our favorite streaming service has to guarantee that we can watch our favorite series without interruptions. For this purpose, they have to constantly analyze what the current situation is, how it will develop in the near future, and how they have to react in order to cope with future conditions without causing power outages or video stalling.
In order to analyze the performance of a system, monitoring mechanisms are often integrated to observe characteristics that describe the workload and the state of the system and its environment. Reactive systems typically employ thresholds, utility functions, or models to determine the current state of the system. However, such reactive systems cannot estimate future events proactively; they can only react to events as they occur. In the case of critical events, reactive determination of the current system state is futile, whereas a proactive system could have predicted the event in advance and enabled timely countermeasures. To achieve proactivity, the system requires estimates of future system states. Given the gap between design time and runtime, it is typically not possible to use expert knowledge to model a priori all situations a system might encounter at runtime. Therefore, prediction methods must be integrated into the system. Depending on the available monitoring data and the complexity of the prediction task, either time series forecasting in combination with thresholding or more sophisticated machine and deep learning models have to be trained.
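The contrast between reactive thresholding and proactive prediction can be sketched in a few lines. This is a deliberately minimal illustration, not the thesis's method: the linear trend forecaster, the threshold value, and the example trace are all assumptions made for the sketch.

```python
# Hypothetical sketch: a reactive check fires only once a threshold violation
# has already happened, while a proactive check forecasts upcoming values
# first and can fire before the violation occurs.

def reactive_alert(history, threshold):
    """Fire only when the latest observation already violates the threshold."""
    return history[-1] > threshold

def proactive_alert(history, threshold, horizon=3):
    """Extrapolate the series by its average increment (a naive trend
    forecast) and fire if any forecast value violates the threshold."""
    step = (history[-1] - history[0]) / (len(history) - 1)
    return any(history[-1] + step * h > threshold for h in range(1, horizon + 1))

load = [40, 45, 55, 70, 85]          # steadily rising utilization (%)
print(reactive_alert(load, 90))      # False: no violation has occurred yet
print(proactive_alert(load, 90))     # True: the trend predicts a violation soon
```

In a real system, the naive trend forecast would be replaced by a proper time series forecasting method, as the following contributions describe.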
Although numerous forecasting methods have been proposed in the literature, these methods have their advantages and disadvantages depending on the characteristics of the time series under consideration. Therefore, expert knowledge is required to decide which forecasting method to choose. However, since the time series observed at runtime cannot be known at design time, such expert knowledge cannot be implemented in the system. In addition to selecting an appropriate forecasting method, several time series preprocessing steps are required to achieve satisfactory forecasting accuracy. In the literature, this preprocessing is often done manually, which is not practical for autonomous computing systems, such as Self-Aware Computing Systems. Several approaches have also been presented in the literature for predicting critical events based on multivariate monitoring data using machine and deep learning. However, these approaches are typically highly domain-specific, such as financial failures, bearing failures, or product failures. Therefore, they require in-depth expert knowledge. For this reason, these approaches cannot be fully automated and are not transferable to other use cases. Thus, the literature lacks generalizable end-to-end workflows for modeling, detecting, and predicting failures that require only little expert knowledge.
To overcome these shortcomings, this thesis presents a system model for meta-self-aware prediction of critical events based on the LRA-M loop of Self-Aware Computing Systems. Building upon this system model, this thesis provides six further contributions to critical event prediction. While the first two contributions address critical event prediction based on univariate data via time series forecasting, the three subsequent contributions address critical event prediction for multivariate monitoring data using machine and deep learning algorithms. Finally, the last contribution addresses the update procedure of the system model. Specifically, the seven main contributions of this thesis can be summarized as follows:
First, we present a system model for meta-self-aware prediction of critical events. To handle both univariate and multivariate monitoring data, it offers univariate time series forecasting for use cases where a single observed variable is representative of the state of the system, and machine learning algorithms combined with various preprocessing techniques for use cases where a large number of variables are observed to characterize the system’s state. However, the two different modeling alternatives are not disjoint, as univariate time series forecasts can also be included to estimate future monitoring data as additional input to the machine learning models. Finally, a feedback loop is incorporated to monitor the achieved prediction quality and trigger model updates.
We propose a novel hybrid time series forecasting method for univariate, seasonal time series, called Telescope. To this end, Telescope automatically preprocesses the time series, performs a kind of divide-and-conquer technique to split the time series into multiple components, and derives additional categorical information. It then forecasts the components and categorical information separately using a specific state-of-the-art method for each component. Finally, Telescope recombines the individual predictions. As Telescope performs both preprocessing and forecasting automatically, it represents a complete end-to-end approach to univariate seasonal time series forecasting. Experimental results show that Telescope achieves enhanced forecast accuracy, more reliable forecasts, and a substantial speedup. Furthermore, we apply Telescope to the scenario of predicting critical events for virtual machine auto-scaling. Here, results show that Telescope considerably reduces the average response time and significantly reduces the number of service level objective violations.
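The divide-and-conquer idea behind component-based forecasting can be illustrated as follows: split a seasonal series into trend, seasonal, and remainder components, forecast each with a simple method, and recombine. Telescope's actual preprocessing, components, and state-of-the-art base forecasters are far more elaborate; everything below (the moving-average trend, the linear trend extrapolation, the example data) is an illustrative assumption.

```python
# Minimal sketch of component-based forecasting for a seasonal series:
# decompose into trend + seasonal + remainder, forecast each component
# separately, then recombine the individual forecasts.

def decompose(series, season):
    n = len(series)
    # trend: trailing moving average over one season length
    trend = [sum(series[max(0, i - season + 1):i + 1]) /
             len(series[max(0, i - season + 1):i + 1]) for i in range(n)]
    detrended = [y - t for y, t in zip(series, trend)]
    # seasonal: average detrended value per seasonal phase
    seasonal = [sum(detrended[p::season]) / len(detrended[p::season])
                for p in range(season)]
    remainder = [d - seasonal[i % season] for i, d in enumerate(detrended)]
    return trend, seasonal, remainder

def forecast(series, season, horizon):
    trend, seasonal, remainder = decompose(series, season)
    slope = (trend[-1] - trend[0]) / (len(trend) - 1)  # linear trend forecast
    rem_mean = sum(remainder) / len(remainder)         # naive remainder forecast
    n = len(series)
    return [trend[-1] + slope * (h + 1)                # trend component
            + seasonal[(n + h) % season]               # repeated seasonal component
            + rem_mean                                 # remainder component
            for h in range(horizon)]

data = [10, 14, 8, 12, 11, 15, 9, 13, 12, 16, 10, 14]  # season length 4, rising
print(forecast(data, season=4, horizon=4))
```

The point of the recombination step is that each component can be handled by whichever method suits it best, which is what gives hybrid approaches like Telescope their accuracy advantage.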
For the automatic selection of a suitable forecasting method, we introduce two frameworks for recommending forecasting methods. The first framework extracts various time series characteristics to learn the relationship between them and forecast accuracy. In contrast, the other framework divides the historical observations into internal training and validation parts to estimate the most appropriate forecasting method. Moreover, this framework also includes time series preprocessing steps. Comparisons between the proposed forecasting method recommendation frameworks and the individual state-of-the-art forecasting methods and the state-of-the-art forecasting method recommendation approach show that the proposed frameworks considerably improve the forecast accuracy.
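The second recommendation idea, splitting the history into internal training and validation parts, can be sketched compactly. The two candidate forecasters below are deliberately simple placeholders (not the state-of-the-art methods compared in the thesis), and the validation fraction is an assumption.

```python
# Sketch: recommend a forecasting method by letting each candidate forecast an
# internal validation split and picking the one with the lowest error (MAE).

def naive_last(train, horizon):
    """Repeat the last observed value."""
    return [train[-1]] * horizon

def drift(train, horizon):
    """Extend the series by its average historical increment."""
    step = (train[-1] - train[0]) / (len(train) - 1)
    return [train[-1] + step * (h + 1) for h in range(horizon)]

def recommend(history, candidates, validation_fraction=0.25):
    split = int(len(history) * (1 - validation_fraction))
    train, valid = history[:split], history[split:]
    def mae(method):
        preds = method(train, len(valid))
        return sum(abs(p - v) for p, v in zip(preds, valid)) / len(valid)
    return min(candidates, key=mae)

history = list(range(1, 21))            # perfectly linear series
best = recommend(history, [naive_last, drift])
print(best.__name__)                    # prints "drift": it wins on a trend
```

The first framework described above would instead extract time series characteristics and learn a mapping from those characteristics to the expected accuracy of each method.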
With regard to multivariate monitoring data, we first present an end-to-end workflow to detect critical events in technical systems in the form of anomalous machine states. The end-to-end design includes raw data processing, phase segmentation, data resampling, feature extraction, and machine tool anomaly detection. In addition, the workflow does not rely on profound domain knowledge or specific monitoring variables, but merely assumes standard machine monitoring data. We evaluate the end-to-end workflow using data from a real CNC machine. The results indicate that conventional frequency analysis does not detect the critical machine conditions well, while our workflow detects the critical events very well with an F1-score of almost 91%.
To predict critical events rather than merely detecting them, we compare different modeling alternatives for critical event prediction in the use case of time-to-failure prediction of hard disk drives. Given that failure records are typically significantly less frequent than instances representing the normal state, we employ different oversampling strategies. Next, we compare the prediction quality of binary class modeling with downscaled multi-class modeling. Furthermore, we integrate univariate time series forecasting into the feature generation process to estimate future monitoring data. Finally, we model the time-to-failure using not only classification models but also regression models. The results suggest that multi-class modeling provides the overall best prediction quality with respect to practical requirements. In addition, we prove that forecasting the features of the prediction model significantly improves the critical event prediction quality.
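The downscaled multi-class modeling mentioned above amounts to mapping the remaining time until failure to a small set of coarse classes. The window boundaries in this sketch are illustrative assumptions, not the ones used in the thesis.

```python
# Hypothetical sketch of multi-class time-to-failure labeling: instead of a
# binary "fails soon / healthy" label, the remaining time until failure is
# bucketed into coarse classes that a classifier can be trained on.

BOUNDARIES = [1, 6, 24, 72]  # hours until failure (illustrative)

def ttf_class(hours_to_failure):
    """Map remaining time-to-failure to a class index.
    Class 0 = failure imminent (< 1 h); higher classes = more remaining time;
    class len(BOUNDARIES) = healthy (>= 72 h)."""
    for cls, bound in enumerate(BOUNDARIES):
        if hours_to_failure < bound:
            return cls
    return len(BOUNDARIES)

print([ttf_class(h) for h in (0.5, 3, 12, 48, 200)])  # [0, 1, 2, 3, 4]
```

Because failure records are rare, the resulting class distribution is heavily skewed toward the healthy class, which is why the oversampling strategies mentioned above are needed before training.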
We propose an end-to-end workflow for predicting critical events of industrial machines. Again, this approach does not rely on expert knowledge except for the definition of monitoring data, and therefore represents a generalizable workflow for predicting critical events of industrial machines. The workflow includes feature extraction, feature handling, target class mapping, and model learning with integrated hyperparameter tuning via a grid-search technique. Drawing on the result of the previous contribution, the workflow models the time-to-failure prediction in terms of multiple classes, where we compare different labeling strategies for multi-class classification. The evaluation using real-world production data of an industrial press demonstrates that the workflow is capable of predicting six different time-to-failure windows with a macro F1-score of 90%. When scaling the time-to-failure classes down to a binary prediction of critical events, the F1-score increases to above 98%.
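The grid-search step of the workflow can be sketched as an exhaustive evaluation of all parameter combinations. The parameter grid and the toy scoring function below are illustrative placeholders; the workflow itself tunes real model hyperparameters against a proper evaluation metric.

```python
# Sketch of grid-search hyperparameter tuning: evaluate every combination of
# the candidate values with a scoring function and keep the best one.
from itertools import product

def grid_search(param_grid, score):
    best_params, best_score = None, float("-inf")
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

grid = {"n_estimators": [50, 100], "max_depth": [3, 5, 7]}
# toy score that prefers deeper trees and more estimators
best, score = grid_search(grid, lambda p: p["max_depth"] + p["n_estimators"] / 100)
print(best)  # {'max_depth': 7, 'n_estimators': 100}
```

Grid search is simple and exhaustive; its cost grows multiplicatively with the number of candidate values per parameter, which is acceptable for the small grids typical of such workflows.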
Finally, we present four update triggers to assess when critical event prediction models should be re-trained during on-line application. Such re-training is required, for instance, due to concept drift. The update triggers introduced in this thesis take into account the elapsed time since the last update, the prediction quality achieved on the current test data, and the prediction quality achieved on the preceding test data. We compare the different update strategies with each other and with the static baseline model. The results demonstrate the necessity of model updates during on-line application and suggest that the update triggers that consider both the prediction quality of the current and preceding test data achieve the best trade-off between prediction quality and number of updates required.
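One of the quality-based trigger designs can be sketched as a comparison between the prediction quality on the preceding and the current test window. The tolerance value is an assumption made for this sketch; the thesis compares several such trigger designs against each other and against a static baseline.

```python
# Illustrative sketch of a quality-based update trigger: re-train the model
# when prediction quality on the current test window drops noticeably below
# the quality observed on the preceding window (e.g. due to concept drift).

def should_update(previous_f1, current_f1, tolerance=0.05):
    """Trigger a model update when quality degrades by more than `tolerance`."""
    return (previous_f1 - current_f1) > tolerance

print(should_update(0.92, 0.90))  # False: small fluctuation, keep the model
print(should_update(0.92, 0.80))  # True: noticeable degradation, re-train
```

The tolerance parameter directly controls the trade-off mentioned above: a small tolerance keeps prediction quality high at the cost of frequent updates, a large one saves re-training at the cost of tolerating degraded predictions for longer.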
We are convinced that the contributions of this thesis provide significant impetus for the academic research community as well as for practitioners. First of all, to the best of our knowledge, we are the first to propose a fully automated, end-to-end, hybrid, component-based forecasting method for seasonal time series that also includes time series preprocessing. Due to the combination of reliably high forecast accuracy and reliably low time-to-result, it offers many new opportunities in applications requiring accurate forecasts within a fixed time period in order to take timely countermeasures. In addition, the promising results of the forecasting method recommendation systems provide new opportunities to enhance forecasting performance for all types of time series, not just seasonal ones. Furthermore, we are the first to expose the deficiencies of the prior state-of-the-art forecasting method recommendation system.
Concerning the contributions to critical event prediction based on multivariate monitoring data, we have already collaborated closely with industrial partners, which supports the practical relevance of the contributions of this thesis. The automated end-to-end design of the proposed workflows that do not demand profound domain or expert knowledge represents a milestone in bridging the gap between academic theory and industrial application. Finally, the workflow for predicting critical events in industrial machines is currently being operationalized in a real production system, underscoring the practical impact of this thesis.
Today’s cloud data centers consume an enormous amount of energy, and their energy consumption will continue to rise. An estimate from 2012 found that data centers consume about 30 billion watts of power, resulting in about 263 TWh of energy usage per year. Energy consumption is projected to rise to 1,929 TWh by 2030. This projected rise in energy demand is fueled by a growing number of services deployed in the cloud: 50% of enterprise workloads have been migrated to the cloud over the last decade. Additionally, an increasing number of devices are using the cloud to provide functionalities, driving data centers to grow. Estimates say more than 75 billion IoT devices will be in use by 2025.
The growing energy demand also increases CO2 emissions. Assuming a CO2 intensity of 200 g of CO2 per kWh, this amounts to close to 227 billion tons of CO2, which exceeds the emissions of all energy-producing power plants in Germany in 2020.
However, data centers consume energy because they respond to service requests that are fulfilled through computing resources. Hence, it is not the users and devices that consume the energy in the data center but the software that controls the hardware. While the hardware is physically consuming energy, it is not always responsible for wasting energy. The software itself plays a vital role in reducing the energy consumption and CO2 emissions of data centers. The scenario of our thesis is, therefore, focused on software development.
Nevertheless, we must first show developers that software contributes to energy consumption by providing evidence of its influence. The second step is to provide methods to assess an application’s power consumption during different phases of the development process and to support modern DevOps and agile development methods. We therefore need automatic selection of system-level energy-consumption models that can accommodate rapid changes in the source code, as well as application-level models that allow developers to locate power-consuming software parts for continuous improvement. Afterward, we need emulation to assess energy efficiency before the actual deployment.
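A common starting point for the system-level energy-consumption models mentioned above is a utilization-based power model: power is interpolated between an idle and a full-load value as a function of CPU utilization, and energy is power integrated over time. The coefficients and the trace in this sketch are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of a linear, utilization-based power model and the energy it
# implies over a sampled utilization trace.

def power_watts(utilization, idle_w=100.0, max_w=250.0):
    """Linear power model: idle power plus a utilization-proportional share."""
    return idle_w + (max_w - idle_w) * utilization

def energy_kwh(utilization_trace, interval_s):
    """Integrate power over a trace sampled every interval_s seconds."""
    joules = sum(power_watts(u) * interval_s for u in utilization_trace)
    return joules / 3.6e6  # joules -> kWh

trace = [0.2, 0.5, 0.9, 0.4]           # utilization samples, one per hour
print(round(energy_kwh(trace, 3600), 3))  # 0.7 kWh for this toy trace
```

Application-level models refine this picture by attributing the measured or modeled power to individual software components, which is what lets developers locate the power-consuming parts of their code.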
The fact that grammar texts stand in manifold relationships with one another and, through their engagement with earlier works of language description, form influential lines of tradition has for centuries been a commonplace readily invoked by the authors of such texts. While interrelationships at the theoretical level (for instance, regarding the evolution of different grammar models) regularly become the subject of meta-grammaticographical research in German studies, the broad field of the language examples cited in these texts, together with the associated evaluative vocabulary, has so far rarely been used to gauge the depth of the entanglements between individual grammar texts and to ascertain the material shape of the corresponding traditions. Even more rarely have impulses from the field of German as a foreign language come into focus.
Against this background, the present study, which is founded on a large corpus of relevant German- and English-language grammar texts of the German language, pursues a twofold research interest: On the one hand, in the spirit of basic research in variational linguistics, it sets out to sketch, by means of computer-assisted qualitative data analysis, how variants were handled in German grammaticography between 1958 and 2015. On the other hand, it attempts to reconstruct intertextual reference structures on the basis of this body of knowledge, in order to gain deeper insight into the historical genesis of what might be called the German grammar codex. Not least, the findings suggest that individual English-language publications participate strongly in the grammaticographical discourse and that the close-knit textual network benefited from striking impulses from the foreign-language sphere, particularly at the beginning of the period under investigation.