Oral antineoplastic drugs are an important component in the treatment of solid tumour diseases as well as haematological and immunological malignancies. Oral drug administration has attractive features (e.g., non-invasive administration, outpatient care with a high level of independence for the patient, and reduced costs for the health care system). However, systemic exposure after oral intake is prone to high interindividual variability (IIV), as it strongly depends on gastrointestinal absorption processes, which are per se characterized by high inter- and intraindividual variability. Disease- and patient-specific characteristics (e.g., disease state, concomitant diseases, concomitant medication, patient demographics) may additionally contribute to variability in plasma concentrations between individual patients. In addition, many oral antineoplastic drugs show complex pharmacokinetics (PK), which have not yet been fully investigated and elucidated for all substances. All this may increase the risk of suboptimal plasma exposure (either subtherapeutic or toxic), which may ultimately jeopardise the success of therapy, either through a loss of efficacy or through increased, intolerable adverse drug reactions. Therapeutic drug monitoring (TDM) can be used to detect suboptimal plasma levels and prevent permanent under- or overexposure. It is essential in the treatment of adrenocortical carcinoma (ACC) with mitotane, a substance with unfavourable PK and high IIV. In the current work, an HPLC-UV method for the TDM of mitotane using volumetric absorptive microsampling (VAMS) was developed. The method requires only a low sample volume (20 µl) of capillary blood, which facilitates dense sampling, e.g., at treatment initiation. However, no reference ranges for measurements from capillary blood have been established so far, and a simple conversion from capillary to plasma concentrations was not possible. To date, the therapeutic range is established only for plasma concentrations, so the observed capillary concentrations could not be reliably interpreted.
The multi-kinase inhibitor cabozantinib is also used for the treatment of ACC. However, not all of its PK properties, such as the characteristic second peak in the cabozantinib concentration-time profile, have been fully understood so far. To gain a mechanistic understanding of the compound, a physiologically based pharmacokinetic (PBPK) model was developed and various theories for modelling the second peak were explored, revealing that enterohepatic circulation (EHC) of the compound is the most plausible explanation. Cabozantinib is mainly metabolized via CYP3A4 and is susceptible to drug-drug interactions (DDI) with, e.g., CYP3A4 inducers. The DDI between cabozantinib and rifampin was investigated with the developed PBPK model, which predicted a 77% reduction in cabozantinib exposure (AUC). Hence, the combination of cabozantinib with strong CYP inducers should be avoided; if this is not possible, co-administration should be monitored using TDM. The model was also used to simulate cabozantinib plasma concentrations at different stages of liver injury, showing a 64% and 50% increase in total exposure for mild and moderate liver injury, respectively.
Ruxolitinib is used, among others, for patients with acute and chronic graft-versus-host disease (GvHD). These patients often also receive posaconazole as prophylaxis against invasive fungal infections, leading to a CYP3A4-mediated DDI between the two substances. Differing dosing recommendations from the FDA and EMA on the use of ruxolitinib in combination with posaconazole complicate clinical use. To simulate the effect of this relevant DDI, two separate PBPK models for ruxolitinib and posaconazole were developed and combined.
Predicted ruxolitinib exposure was compared to observed plasma concentrations obtained in GvHD patients. The model simulations showed that the observed ruxolitinib concentrations in these patients were generally higher than the simulated concentrations in healthy individuals, with standard dosing in both scenarios. According to the developed model, the EMA-recommended ruxolitinib dose reduction seems plausible, as ruxolitinib plasma concentrations can be higher than expected due to the complexity of the disease and extensive co-medication.
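To illustrate how EHC can produce such a second peak, here is a minimal sketch (not the thesis's PBPK model) of a one-compartment model extended by a gallbladder compartment that empties back into the gut during a fixed time window; all rate constants, the dose, and the emptying window are illustrative placeholders, not fitted cabozantinib parameters.

```python
# Minimal illustration of how enterohepatic circulation (EHC) can create a
# second peak: a one-compartment model plus a gallbladder compartment that
# empties back into the gut during a fixed window. All values are
# illustrative placeholders, not fitted cabozantinib parameters.
import numpy as np
from scipy.integrate import solve_ivp

ka, ke = 1.0, 0.10        # 1/h: absorption and elimination rate constants
k_bile = 0.05             # 1/h: biliary excretion from the central compartment
t_open = (8.0, 9.0)       # h: window during which the gallbladder empties
k_empty = 2.0             # 1/h: gallbladder emptying rate while the window is open

def rhs(t, y):
    gut, central, gall = y
    emptying = k_empty * gall if t_open[0] <= t <= t_open[1] else 0.0
    return [
        -ka * gut + emptying,                # bile content re-enters the gut
        ka * gut - (ke + k_bile) * central,  # absorption minus elimination
        k_bile * central - emptying,         # bile accumulates, then empties
    ]

sol = solve_ivp(rhs, (0.0, 24.0), [100.0, 0.0, 0.0], max_step=0.05,
                dense_output=True)
t = np.linspace(0.0, 24.0, 481)
central = sol.sol(t)[1]
# The profile shows the absorption peak early on and a second bump shortly
# after the gallbladder empties at t = 8 h.
i8, i10 = np.searchsorted(t, [8.0, 10.0])
print(f"concentration at ~8 h: {central[i8]:.2f}, at ~10 h: {central[i10]:.2f}")
```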
Environmental issues have emerged especially since humans began burning fossil fuels, which leads to air pollution and climate change that harm the environment. The substantial consequences of these issues have prompted strong efforts to assess the state of our environment.
Various environmental machine learning (ML) tasks aid these efforts. These tasks concern environmental data but are otherwise common ML tasks, i.e., datasets are split (training, validation, test), hyperparameters are optimized on validation data, and test set metrics measure a model's generalizability. This work focuses on the following environmental ML tasks: Regarding air pollution, land use regression (LUR) estimates air pollutant concentrations at locations where no measurements are available, based on measured locations and each location's land use (e.g., industry, streets). For LUR, this work uses data from London (modeled) and Zurich (measured). Concerning climate change, a common ML task is model output statistics (MOS), where a climate model's output for a study area is altered to better fit Earth observations and provide more accurate climate data. This work uses the regional climate model (RCM) REMO and Earth observations from the E-OBS dataset for MOS. Another climate-related task is grain size distribution interpolation, where soil properties at locations without measurements are estimated based on the few measured locations. This can provide climate models with soil information that is important for hydrology. For this task, data from Lower Franconia is used.
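As a point of reference for what such a task looks like in code, the following is a minimal conventional LUR sketch, assuming tabular land-use features per measurement site; the features, data, and model choice are illustrative, not the thesis's setup.

```python
# Minimal conventional land use regression (LUR) sketch: estimate NO2 at
# unmeasured locations from land-use features of measured locations.
# Feature names and data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Per-location land-use features, e.g. industry share, street density and
# green space within a buffer around each measurement site.
X = rng.uniform(0, 1, size=(n, 3))                 # [industry, streets, green]
y = 10 * X[:, 0] + 30 * X[:, 1] - 8 * X[:, 2] + rng.normal(0, 2, n)  # NO2 (µg/m³)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"R² on held-out sites: {model.score(X_test, y_test):.2f}")
```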
Such environmental ML tasks commonly have a number of properties: (i) geospatiality, i.e., their data refers to locations relative to the Earth’s surface. (ii) The environmental variables to estimate or predict are usually continuous. (iii) Data can be imbalanced due to relatively rare extreme events (e.g., extreme precipitation). (iv) Multiple related potential target variables can be available per location, since measurement devices often contain different sensors. (v) Labels are spatially often only sparsely available since conducting measurements at all locations of interest is usually infeasible. These properties present challenges but also opportunities when designing ML methods for such tasks.
In the past, environmental ML tasks have been tackled with conventional ML methods, such as linear regression or random forests (RFs). However, the field of ML has made tremendous leaps beyond these classic models through deep learning (DL). In DL, models use multiple layers of neurons, producing increasingly higher-level feature representations with growing layer depth. DL has made previously infeasible ML tasks feasible, significantly improved performance on many tasks compared to existing ML models, and eliminated the need for manual feature engineering in some domains due to its ability to learn features from raw data. To harness these advantages for environmental domains, it is promising to develop novel DL methods for environmental ML tasks.
This thesis presents methods for dealing with special challenges and exploiting opportunities inherent to environmental ML tasks in conjunction with DL. To this end, the proposed methods explore the following techniques: (i) Convolutions as in convolutional neural networks (CNNs) to exploit reoccurring spatial patterns in geospatial data. (ii) Posing the problems as regression tasks to estimate the continuous variables. (iii) Density-based weighting to improve estimation performance for rare and extreme events. (iv) Multi-task learning to make use of multiple related target variables. (v) Semi-supervised learning to cope with label sparsity. Using these techniques, this thesis considers four research questions: (i) Can air pollution be estimated without manual feature engineering? This is answered positively by the introduction of the CNN-based LUR model MapLUR as well as the off-the-shelf LUR solution OpenLUR. (ii) Can colocated pollution data improve spatial air pollution models? Multi-task learning for LUR is developed for this, showing potential for improvements with colocated data. (iii) Can DL models improve the quality of climate model outputs? The proposed DL climate MOS architecture ConvMOS demonstrates this. Additionally, semi-supervised training of multilayer perceptrons (MLPs) for grain size distribution interpolation is presented, which can provide improved input data. (iv) Can DL models be taught to better estimate climate extremes? To this end, density-based weighting for imbalanced regression (DenseLoss) is proposed and applied to the DL architecture ConvMOS, improving climate extremes estimation. These methods show how especially DL techniques can be developed for environmental ML tasks with their special characteristics in mind. This allows for better models than previously possible with conventional ML, leading to more accurate assessment and better understanding of the state of our environment.
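To make the density-based weighting idea (iii) concrete, the following is a minimal sketch in the spirit of DenseLoss, assuming a KDE-based density estimate and a simple density-to-weight mapping; the thesis's exact formulation may differ.

```python
# Density-based weighting for imbalanced regression, in the spirit of
# DenseLoss: rare (extreme) target values receive larger loss weights.
# The KDE-based density estimate and the density-to-weight mapping are a
# sketch; the thesis's exact formulation may differ.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
y = rng.gamma(2.0, 2.0, size=2048)        # skewed targets, e.g. precipitation
pred = np.full_like(y, y.mean())          # stand-in for model predictions

density = gaussian_kde(y)(y)              # estimated target density per sample
alpha = 1.0                               # weighting strength
weights = 1.0 + alpha * (1.0 - density / density.max())

# Weighted MSE: samples in the sparse tail contribute more to the loss.
weighted_mse = np.mean(weights * (pred - y) ** 2)
print(f"weighted MSE {weighted_mse:.2f} vs unweighted {np.mean((pred - y)**2):.2f}")
```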
Empathy, the act of sharing another person’s affective state, is a ubiquitous driver for helping others and feeling close to them. These experiences are integral parts of human behavior and society. The studies presented in this dissertation aimed to investigate the sustainability and stability of social closeness and prosocial decision-making driven by empathy and other social motives. In this vein, four studies were conducted in which behavioral and neural indicators of empathy sustainability were identified using model-based functional magnetic resonance imaging (fMRI).
Applying reinforcement learning, drift-diffusion modelling (DDM), and fMRI, the first two studies investigated the formation and sustainability of empathy-related social closeness (study 1) and examined how sustainably empathy led to prosocial behavior (study 2). Using DDM and fMRI, the last two studies investigated how empathy, combined either with reciprocity (the social norm of returning a favor) or with the motive of outcome maximization, altered the behavioral and neural social decision process.
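For readers unfamiliar with DDM, the following minimal sketch simulates the underlying drift-diffusion process, in which noisy evidence accumulates between two decision boundaries until one is hit; all parameters are illustrative, not estimates from these studies.

```python
# Minimal drift-diffusion model (DDM) simulation: evidence accumulates with
# drift v and Gaussian noise until it hits one of two decision boundaries
# (0 or a). Parameters are illustrative, not estimates from the studies.
import numpy as np

def simulate_ddm(v=0.3, a=1.0, z=0.5, dt=0.001, sigma=1.0, rng=None):
    """Return (choice, reaction time) for one trial."""
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0                    # start between boundaries 0 and a
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x >= a else 0), t

rng = np.random.default_rng(0)
trials = [simulate_ddm(rng=rng) for _ in range(500)]
choices, rts = zip(*trials)
print(f"upper-boundary rate {np.mean(choices):.2f}, mean RT {np.mean(rts):.2f} s")
```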
The results showed that empathy-related social closeness and prosocial decision tendencies persisted even if empathy was rarely reinforced. The sustainability of these empathy effects was related to recalibration of the empathy-related social closeness learning signal (study 1) and the maintenance of a prosocial decision bias (study 2). The findings of study 3 showed that empathy boosted the processing of reciprocity-based social decisions, but not vice versa. Study 4 revealed that empathy-related decisions were modulated by the motive of outcome maximization, depending on individual differences in state empathy.
Together, the studies strongly support the concept of empathy as a sustainable driver of social closeness and prosocial behavior.
Today's cloud data centers consume an enormous amount of energy, and this consumption will rise further. An estimate from 2012 found that data centers consume about 30 billion watts of power, resulting in about 263 TWh of energy usage per year, and energy consumption is projected to rise to 1,929 TWh by 2030. This projected rise in energy demand is fueled by a growing number of services deployed in the cloud: about 50% of enterprise workloads have been migrated to the cloud over the last decade. Additionally, an increasing number of devices use the cloud to provide functionality, causing data centers to grow; estimates suggest more than 75 billion IoT devices will be in use by 2025.
The growing energy demand also increases CO2 emissions. Assuming a CO2 intensity of 200 g CO2 per kWh, this amounts to roughly 227 million tons of CO2, more than the emissions of all energy-producing power plants in Germany in 2020.
Yet data centers consume energy because they respond to service requests that are fulfilled through computing resources. Hence, it is not the users and devices that consume the energy in the data center but the software that controls the hardware. While the hardware physically consumes the energy, it is not always responsible for wasting it; the software itself plays a vital role in reducing the energy consumption and CO2 emissions of data centers. The scenario of this thesis therefore focuses on software development.
Nevertheless, we must first show developers that software contributes to energy consumption by providing evidence of its influence. The second step is to provide methods to assess an application's power consumption during different phases of the development process and to support modern DevOps and agile development methods. We therefore need automatic selection of system-level energy-consumption models that can accommodate rapid changes in the source code, as well as application-level models that allow developers to locate power-consuming parts of the software for continuous improvement. Afterward, we need emulation to assess energy efficiency before the actual deployment.
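As an illustration of what a simple system-level energy-consumption model can look like, the following sketch fits a linear utilization-to-power model; the coefficients and readings are synthetic, and the thesis's model selection covers a broader family of models.

```python
# Sketch of a system-level energy-consumption model of the linear,
# utilization-based kind: P = P_idle + slope * utilization. Fitting such a
# model per host is one simple way to map resource usage to power draw;
# all values below are synthetic, not measured.
import numpy as np

rng = np.random.default_rng(1)
util = rng.uniform(0, 1, 200)                     # CPU utilization samples
power = 80 + 120 * util + rng.normal(0, 3, 200)   # watts, synthetic readings

slope, intercept = np.polyfit(util, power, 1)     # least-squares linear fit
print(f"P(util) ≈ {intercept:.0f} W + {slope:.0f} W * util")

# Energy of a 3600 s job at 70% utilization under the fitted model:
print(f"job energy ≈ {(intercept + slope * 0.7) * 3600 / 3.6e6:.3f} kWh")
```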
Landslide susceptibility assessment in the Chiconquiaco Mountain Range area, Veracruz (Mexico)
(2022)
In Mexico, numerous landslides occur each year, and Veracruz is the state with the third-highest number of events. The Chiconquiaco Mountain Range in the central part of Veracruz is particularly affected, yet no detailed information on the spatial distribution of existing landslides or future occurrences is available. This leaves the local population exposed to an unknown threat, unable to react appropriately to this hazard or to consider potential landslide occurrence in future planning processes.
Thus, the overall objective of the present study is to provide a comprehensive assessment of the landslide situation in the Chiconquiaco Mountain Range area. Combining a site-specific and a regional approach makes it possible to investigate the causes, triggers, and process types, as well as to model the landslide susceptibility for the entire study area.
For the site-specific approach, the focus lies on characterizing the Capulín landslide, one of the largest mass movements in the area. In this context, the task is to develop a multi-methodological concept concentrating on cost-effective, flexible and non-invasive methods. This approach shows that the applied methods complement each other very well and that their combination allows for a detailed characterization of the landslide.
The analyses revealed that the Capulín landslide is a complex mass movement. It comprises rotational movement in the upper parts and translational movement in the lower areas, as well as flow processes at the flank and foot, and is therefore classified as a compound slide-flow according to Cruden and Varnes (1996). Furthermore, the investigations show that the Capulín landslide represents a reactivation of a former process. This is important new information, especially with regard to the other landslides identified in the study area. Both the road reconstructed after the landslide, which runs through the landslide mass, and the stream causing erosion at the foot of the landslide severely affect its stability, making it highly susceptible to future reactivation. This is particularly important as the landslide is located only a few hundred meters from the village El Capulín, and an extension of the landslide area could cause severe damage.
The next step in the landslide assessment consists of integrating the data obtained in the site-specific approach into the regional analysis. Here, the focus lies on transferring the generated data to the entire study area. The developed methodological concept yields applicable results, which is supported by different validation approaches.
The susceptibility modeling as well as the landslide inventory reveal that the highest probability of landslide occurrence is related to areas with moderate slopes covered by slope deposits. These deposits comprise material from old mass movements and erosion processes and are highly susceptible to landslides. The results give new insights into the landslide situation in the Chiconquiaco Mountain Range area, since landslide occurrence was previously attributed to steep slopes of basalt and andesite.
The susceptibility map contributes to a better assessment of the landslide situation in the study area and simultaneously shows that it is crucial to include the specific characteristics of the respective area in the modeling process; otherwise, local conditions may not be represented correctly.
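The abstract does not name the susceptibility algorithm, but the typical workflow can be sketched as follows: a classifier is fitted on landslide/non-landslide points with terrain predictors and then predicts a susceptibility probability per cell. The logistic-regression choice, the features, and the data below are illustrative, not the study's actual method.

```python
# Sketch of a typical landslide-susceptibility workflow: fit a classifier on
# landslide/non-landslide points with terrain predictors, then predict a
# susceptibility probability for every cell. Model choice and features are
# illustrative; the study's method may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
slope = rng.uniform(0, 45, n)                 # slope angle (degrees)
slope_deposit = rng.integers(0, 2, n)         # 1 = cell covered by slope deposits
# Synthetic inventory: moderate slopes on slope deposits are most susceptible.
p = 1 / (1 + np.exp(-(-4 + 0.1 * slope + 2.5 * slope_deposit)))
landslide = rng.random(n) < p

X = np.column_stack([slope, slope_deposit])
clf = LogisticRegression().fit(X, landslide)

# Susceptibility for one cell with a 20° slope on slope deposits:
print(f"susceptibility: {clf.predict_proba([[20.0, 1]])[0, 1]:.2f}")
```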
The aim of this work was to find new input data for the land surface description of the regional climate model REMO and to integrate them into the model in order to improve its predictive quality. The new data were incorporated in such a way that the previous data remain available as an option. This makes it possible to check whether, and to what extent, the boundary data required by every climate model affect the model results. In the course of this work, many different data sets and methods for generating new parameters were compared, because in addition to replacing the constant input values for various surface parameters and the associated changes, modifications were also made to the soil parametrization as a further improvement, specifically with regard to the soil temperatures in REMO. The effects triggered by the various changes were tested for the CORDEX domain EUR-44 with a resolution of about 50 km and for the newly defined Germany domain GER-11 embedded within it with a resolution of about 12 km, and all changes were validated against various observational data sets.
The work was divided into three main parts. The first part consisted of comparing the various input data sets at different resolutions, independently of the actual climate model, and assessing their performance in all parts of the world, with a particular focus on quality in the later model domains. Taking into account factors such as global availability, improved spatial resolution and free use of the data, as well as validation results from other studies, four new topography data sets (SRTM, ALOS, TANDEM and ASTER) and three new soil data sets (FAOn, Soilgrid and HWSD) were prepared for use in the REMO preprocessing and compared with each other as well as with the data previously used in REMO. Based on these comparison studies, the data set versions of SRTM, ALOS and TANDEM used here were ruled out for the REMO runs performed in this work. For the new soil data sets, advantage was taken of the fact that they provide maps of different soil properties for different depths. Previously, all soil parameters required in REMO were derived from five soil texture classes plus an additional peat class and assumed constant over the entire model soil column (down to about 10 m). In the second part, various sensitivity studies were carried out for the example year 2000, based on the new data sets selected in the first part and the newly available soil variables. Different new parametrizations for the soil variables previously derived from texture, and parametrizations of further hydrological and thermal soil properties, were compared. Furthermore, because the new soil properties are no longer constant over depth, a new numerical method for calculating the soil temperatures of the five layers in REMO was tested, which in turn required further adaptations. The testing and selection of the various data set and parametrization versions with respect to model performance was divided into three experiment plans. The first plan examined the effects of the selected topography and soil data sets. The second plan addressed the differences between the various parametrization approaches for the soil variables in terms of the variables used to calculate the soil properties, properties that are variable or constant over depth, and the method used to calculate soil temperature changes. The findings from these two experiment plans, which were carried out for both study areas, led to further parametrization changes in the third plan. All changes in this third experiment plan were tested successively, so that the pairwise comparison of two consecutive model runs reflects the effect of the innovation introduced in the second run of each pair. The last part of the work consisted of analyzing five longer model runs (2000-2018), which were performed to verify the results of the sensitivity studies and to assess performance under further, partly extreme atmospheric conditions.
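The scheme's details are not given here; as an illustration of the kind of computation involved, the following is a minimal sketch of an implicit (backward Euler) update for soil temperatures in five layers with depth-dependent thermal properties. Layer thicknesses, conductivities and heat capacities are illustrative placeholders, not REMO's actual values.

```python
# Minimal sketch of an implicit (backward Euler) update for soil temperatures
# in five layers with depth-dependent thermal properties. All values are
# illustrative placeholders, not REMO's actual parameters.
import numpy as np

dz = np.array([0.065, 0.254, 0.913, 2.902, 5.7])     # layer thickness (m)
lam = np.array([1.5, 1.6, 1.8, 2.0, 2.2])            # conductivity (W/m/K)
cv = np.array([2.2e6, 2.3e6, 2.4e6, 2.5e6, 2.5e6])   # vol. heat capacity (J/m³/K)
T = np.array([285.0, 284.0, 283.5, 283.0, 283.0])    # layer temperatures (K)
T_surf, dt = 290.0, 3600.0                           # surface forcing (K), step (s)

n = len(dz)
A = np.zeros((n, n))
b = T.copy()
# Couple neighbouring layer centers via harmonic-mean conductances.
for i in range(n):
    A[i, i] = 1.0
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            k = 2 * lam[i] * lam[j] / (lam[i] + lam[j])
            g = dt * k / (cv[i] * dz[i] * 0.5 * (dz[i] + dz[j]))
            A[i, i] += g
            A[i, j] -= g
# Upper boundary: fixed surface temperature acting on the top layer.
g_top = dt * lam[0] / (cv[0] * dz[0] * 0.5 * dz[0])
A[0, 0] += g_top
b[0] += g_top * T_surf
# Lower boundary: zero flux (nothing to add).

T_new = np.linalg.solve(A, b)
print(np.round(T_new, 2))
```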
For this purpose, the previous REMO model version (id01) for the two study areas EUR-44 and GER-11 was chosen as the reference, together with two model versions selected on the basis of the comparison results from experiment plan 3 (id06 and id15a for GER-11) and the final version (id18a for GER-11), which contains all changes made in this work.
It turned out that both the new topography data and the new soil data differ greatly from the data previously used in REMO. In addition, the auxiliary variables derived from these constant input data changed considerably depending on the parametrization used. This was particularly evident for the soil parameters: both the spatial distribution and the value range of the different model versions differed strongly. However, assessing the quality of the resulting parameters was made difficult by the fact that the various soil data sets used for validation also deviate considerably from one another for these parameters. Despite the extensive changes, the final model version id18a resembled the results of the previous REMO version in most variables. Depending on temporal and spatial aggregation as well as on region and season, slight improvements but also slight deteriorations relative to the climatological validation data were found. Larger changes compared with the previous model version appeared in the deeper soil layers, but these could not be assessed due to missing validation data. For all 2 m temperatures, a slight overall warming compared with the previous model run was observed, which on the one hand had a negative effect on the minimum temperature, already too high on average, but on the other hand had a positive effect on the model's previously too low maximum temperature in the regions considered. No significant changes could be detected in the precipitation signal or in the 10 m wind variables, although the new topography deviates considerably from the previous one in some places in the model domain. Furthermore, the ranking of the different model versions varied depending on the quality index applied.
To put these results into perspective, it must be taken into account that the new data were tested for model domains with 50 and 12 km spatial resolution, respectively, and with the associated hydrostatic model version. Moreover, particularly in the case of topography, the previously included GTOPO data (1 km resolution) are suitable for aggregation to this coarser model resolution, whereas the previous soil data already reach their limits at 50 km resolution. It should also be noted that not only the mean values of these data but also their subgrid variability are used as variables in the model for various parametrizations. It is therefore essential that the input data provide a considerably higher resolution than the resolution defined for the modelling. For local climate simulations with resolutions in the low kilometre range, vertical motions (non-hydrostatic model version) also play an important role; these are strongly influenced by the topography and its horizontal and vertical rates of change, which can make the much higher-resolution data incorporated in this work valuable for the future development of REMO.
The first problem is that of optimal volume allocation in procurement. The choice of this problem was motivated by a study whose objective was to support decision-making at two procurement organizations for the procurement of Depot Medroxyprogesterone Acetate (DMPA), an injectable contraceptive. At the time of this study, only one supplier that had undergone the costly and lengthy process of WHO pre-qualification was available to these organizations. However, a new entrant supplier was expected to receive WHO qualification within the next year, thus becoming a viable second source for DMPA procurement. When deciding how to allocate the procurement volume between the two suppliers, the buyers had to consider the impact on price as well as risk. Higher allocations to one supplier yield lower prices but expose a buyer to higher supply risks, while an even allocation results in lower supply risk but also reduces competitive pressure, resulting in higher prices. Our research investigates this single- versus dual-sourcing problem and quantifies in one model the impact of the procurement volume on competition and risk. To support decision-makers, we develop a mathematical framework that accounts for the characteristics of donor-funded global health markets and models the effects of an entrant on purchasing costs and supply risks. Our in-depth analysis provides insights into how the optimal allocation decision is affected by various parameters and explores the trade-off between competition and supply risk. For example, we find that, even if the entrant supplier introduces longer lead times and a higher default risk, the buyer still benefits from dual sourcing. However, these risk-diversification benefits depend heavily on the entrant's in-country registration: if the buyer can ship the entrant's product to only a selected number of countries, the buyer does not benefit from dual sourcing as much as it would if the entrant's product could be shipped to all supplied countries. We show that the buyer should be interested in qualifying the entrant's product in countries with high demand first.
In the second problem we explore a new tendering mechanism called the postponement tender, which can be useful when buyers in the global health industry want to contract new generics suppliers with uncertain product quality. The mechanism allows a buyer to postpone part of the procurement volume’s allocation so the buyer can learn about the unknown quality before allocating the remaining volume to the best supplier in terms of both price and quality. We develop a mathematical model to capture the decision-maker’s trade-offs in setting the right split between the initial volume and the postponed volume. Our analysis shows that a buyer can benefit from this mechanism more than it can from a single-sourcing format, as it can decrease the risk of receiving poor quality (in terms of product quality and logistics performance) and even increase competitive pressure between the suppliers, thereby lowering the purchasing costs. By considering market parameters like the buyer’s size, the suppliers’ value (difference between quality and cost), quality uncertainty, and minimum order volumes, we derive optimal sourcing strategies for various market structures and explore how competition is affected by the buyer’s learning about the suppliers’ quality through the initial volume.
The third problem considers the repeated procurement problem of pharmacies in Kenya that hold multi-product inventories. Coordinating orders allows pharmacies to achieve lower procurement prices by using the quantity discounts manufacturers offer and by sharing fixed ordering costs, such as logistics costs. However, coordinating and optimizing orders for multiple products is complex and costly. To solve the coordinated procurement problem, also known as the Joint Replenishment Problem (JRP) with quantity discounts, a novel, data-driven inventory policy using sample-average approximation is proposed. The inventory policy is developed based on renewal theory and is evaluated using real-world sales data from Kenyan pharmacies. Multiple benchmarks are used to evaluate the performance of the approach. First, it is compared to the theoretically optimal policy (a dynamic-programming policy) in the single-product setting without quantity discounts to show that the proposed policy results in comparable inventory costs. Second, the policy is evaluated in the original multi-product setting with quantity discounts and compared to ex-post optimal costs. The evaluation shows that the policy's performance in the multi-product setting is similar to its performance in the single-product setting (with respect to ex-post optimal costs), suggesting that the proposed policy offers a promising, data-driven solution to these types of multi-product inventory problems.
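To illustrate the sample-average idea in its simplest form, the following single-product sketch picks the order quantity whose sampled average cost per day is lowest; demand and cost values are synthetic, and the thesis's policy additionally handles multiple products and quantity discounts.

```python
# Sample-average approximation sketch for a single-product, renewal-style
# policy: choose the order quantity Q whose *sampled* average cost per day
# (fixed ordering + holding) is lowest. All values are synthetic; the
# thesis's policy also covers multiple products and quantity discounts.
import numpy as np

rng = np.random.default_rng(0)
K, h = 50.0, 0.1                              # fixed ordering cost; holding cost/unit/day
demand = rng.poisson(10, size=(200, 365))     # sampled demand paths (scenarios x days)

def avg_cost_per_day(Q: int) -> float:
    costs = []
    for path in demand:
        inv, cost = 0, 0.0
        for d in path:
            while inv < d:                    # reorder until demand can be met
                inv += Q
                cost += K
            inv -= d
            cost += h * inv                   # end-of-day holding cost
        costs.append(cost / len(path))
    return float(np.mean(costs))              # sample-average cost estimate

best_Q = min(range(20, 401, 20), key=avg_cost_per_day)
print(f"SAA-optimal order quantity: {best_Q}")
```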
The present thesis considers the modelling of gas mixtures via a kinetic description. Fundamentals of the Boltzmann equation for gas mixtures and the BGK approximation are presented; in particular, issues in extending these models to gas mixtures are discussed. A non-reactive two-component gas mixture is considered. The two-species mixture is modelled by a system of kinetic BGK equations featuring two interaction terms to account for momentum and energy transfer between the two species. The model presented here contains several models from physicists and engineers as special cases. Consistency of this model is proven: conservation properties, positivity of all temperatures and the H-theorem. The form of the global equilibrium as Maxwell distributions is specified. Moreover, the usual macroscopic conservation laws can be derived.
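The general structure of such a two-species BGK system with separate intra- and interspecies relaxation terms can be sketched as follows; the notation is illustrative, and the precise Maxwellian attractors and collision frequencies follow the thesis's definitions.

```latex
% Structure of a two-species BGK system with intra- and interspecies
% relaxation terms. The attractors M_1, M_2, M_12, M_21 are Maxwellians
% whose parameters are chosen so that the exchange terms conserve the
% required moments; nu_kj are collision frequencies, n_k number densities.
\begin{aligned}
\partial_t f_1 + v \cdot \nabla_x f_1 &= \nu_{11} n_1 (M_1 - f_1) + \nu_{12} n_2 (M_{12} - f_1),\\
\partial_t f_2 + v \cdot \nabla_x f_2 &= \nu_{22} n_2 (M_2 - f_2) + \nu_{21} n_1 (M_{21} - f_2).
\end{aligned}
```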
In the literature, there is another type of BGK model for gas mixtures developed by Andries, Aoki and Perthame, which contains only one interaction term. In this thesis, the advantages of these two types of models are discussed and the usefulness of the model presented here is shown by using this model to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described in the literature by Dellacherie. In addition, for each of the two models existence and uniqueness of mild solutions is shown. Moreover, positivity of classical solutions is proven.
Then, the model presented here is applied to three physical applications: a plasma consisting of ions and electrons, a gas mixture which deviates from equilibrium and a gas mixture consisting of polyatomic molecules.
First, the model is extended to charged particles, and the equations of magnetohydrodynamics are derived from it. This extended model is then applied to a mixture of ions and electrons in a special physical constellation found, for example, in a Tokamak: the mixture is partly in equilibrium in some regions, while in others it deviates from equilibrium. The model presented in this thesis is used for this purpose because it has the advantage of separating the intra- and interspecies interactions. A new model based on a micro-macro decomposition is then proposed in order to capture this regime of being partly in equilibrium, partly not. Theoretical results, namely convergence rates to equilibrium in the space-homogeneous case and Landau damping for mixtures, are presented and compared with numerical results.
Second, the model presented here is applied to a gas mixture that deviates from equilibrium such that it is described by Navier-Stokes equations on the macroscopic level. In this macroscopic description, four physical coefficients are expected to appear, characterizing the physical behaviour of the gases: the diffusion coefficient, the viscosity coefficient, the heat conductivity and the thermal diffusion parameter. A Chapman-Enskog expansion of the model presented here is performed in order to capture three of these four physical coefficients. In addition, several possible extensions to an ellipsoidal statistical model for gas mixtures are proposed in order to capture the fourth coefficient. Three extensions are proposed: one that is as simple as possible, an intuitive one copying the one-species case, and one that takes into account the physical motivation of Holway, who invented the ellipsoidal statistical model for one species. Consistency of the extended models (conservation properties, positivity of all temperatures and the H-theorem) is proven, and the shape of the global Maxwell distributions in equilibrium is specified.
Third, the model presented here is applied to polyatomic molecules. A multi-component gas mixture with translational and internal energy degrees of freedom is considered. The two species are allowed to have different degrees of freedom in internal energy and are modelled by a system of kinetic ellipsoidal statistical equations. Consistency of this model is shown: conservation properties, positivity of the temperature, the H-theorem and the form of Maxwell distributions in equilibrium. For numerical purposes, the Chu reduction is applied to the developed model for polyatomic gases to reduce its complexity, and an application to a gas consisting of a mono-atomic and a diatomic component is given.
Last, the limit from the model presented here to the dissipative Euler equations for gas mixtures is proven.
Anthropogenic climate change is one of the greatest challenges of the 21st century. A major difficulty lies in the uncertainty regarding regional changes in precipitation and temperature, which considerably complicates the development of suitable adaptation strategies.
In the present work, four evaluation approaches with a total of 13 metrics are developed and compared for current global (two generations) and regional climate models, followed by an analysis of projection uncertainty. Based on the resulting model ratings, weighted statements are made about the uncertainty range of the future climate. The models are evaluated over the Mediterranean region and eight subregions, assessing the seasonal trend of temperature and precipitation over the evaluation period 1960-2009. In addition, for certain metrics, the climatological mean or the harmonic time series properties are evaluated. Finally, to test the transferability of the results, six globally distributed regions are examined in addition to the main study areas. Temporal consistency is addressed by analysing a second, slightly shifted evaluation period, and the dependence of the model ratings on the choice of reference data is examined using a total of three reference data sets.
The results suggest that nearly all metrics are suitable for model evaluation. Evaluating different variables and regions produces model ratings that fit into the context of current research findings. For instance, the performance of the latest generation of global climate models (2013) was rated on average similarly high as, and in many situations higher than, that of the previous generation (2007). No model was consistently best. Most of the developed metrics show consistent model ratings for similar situations. For weighting, precipitation proved particularly suitable, because of the on average clear differences in model performance combined with lower simulation quality. Conversely, the metrics for the temperature models generally show predominantly high evaluation results, so that weighting yields little information gain. While the metrics can readily be applied to different regions and scale levels, this does not in principle hold for different evaluation periods. In addition, the model rankings of different regions and seasons often show only low correlations; this is especially true for precipitation, whereas slight agreements can be observed for temperature. Comparing the mean rankings over all model ratings and situations of the main Mediterranean regions with the global regions yields a significant correlation of 0.39 for temperature, while for precipitation it is around zero. This result holds for all three reference data sets used in the Mediterranean: the correlation of the precipitation-based model ratings for different reference data sets always fluctuates around zero, while that of the temperature rankings ranges between 0.36 and 0.44. In general, the metrics are judged to be suitable evaluation tools for climate models. They can thus contribute to narrowing the uncertainty range and hence to strengthening confidence in climate projections.
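To illustrate how such metric-based weighting can be used, the following minimal sketch turns per-model skill scores into normalized weights and computes a weighted mean and spread of the projected change; the scores, projections, and the skill-to-weight mapping are all assumptions for illustration, not the thesis's data or scheme.

```python
# Sketch of metric-based weighting of climate projections: turn per-model
# skill scores into normalized weights, then compute a weighted mean and a
# weighted spread of the projected change. Scores and projections are
# synthetic; the thesis uses 13 metrics and real model ensembles.
import numpy as np

skill = np.array([0.8, 0.6, 0.9, 0.4, 0.7])     # evaluation score per model (higher = better)
delta_t = np.array([2.1, 3.0, 2.4, 3.8, 2.7])   # projected warming per model (K)

w = skill / skill.sum()                          # simple skill-proportional weights
mean = np.sum(w * delta_t)
spread = np.sqrt(np.sum(w * (delta_t - mean) ** 2))
print(f"weighted projection: {mean:.2f} K ± {spread:.2f} K "
      f"(unweighted: {delta_t.mean():.2f} K ± {delta_t.std():.2f} K)")
```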
However, the dependence of the model ratings on region and study period must be taken into account. The analysis of the consistency of model ratings, and of the strengths and weaknesses of the climate models, thus holds great potential for future studies to further increase confidence in model projections.