Environmental issues have emerged especially since humans began burning fossil fuels, causing air pollution and climate change that harm the environment. The substantial consequences of these issues have prompted strong efforts to assess the state of our environment.
Various environmental machine learning (ML) tasks aid these efforts. These tasks concern environmental data but are otherwise common ML tasks, i.e., datasets are split (training, validation, test), hyperparameters are optimized on validation data, and test set metrics measure a model’s generalizability. This work focuses on the following environmental ML tasks: Regarding air pollution, land use regression (LUR) estimates air pollutant concentrations at locations without measurements based on measured locations and each location’s land use (e.g., industry, streets). For LUR, this work uses data from London (modeled) and Zurich (measured). Concerning climate change, a common ML task is model output statistics (MOS), where a climate model’s output for a study area is adjusted to better fit Earth observations and thus provide more accurate climate data. This work uses the regional climate model (RCM) REMO and Earth observations from the E-OBS dataset for MOS. Another climate-related task is grain size distribution interpolation, where soil properties at locations without measurements are estimated from the few measured locations. This can provide climate models with soil information that is important for hydrology. For this task, data from Lower Franconia is used.
Such environmental ML tasks commonly have a number of properties: (i) geospatiality, i.e., their data refers to locations relative to the Earth’s surface. (ii) The environmental variables to estimate or predict are usually continuous. (iii) Data can be imbalanced due to relatively rare extreme events (e.g., extreme precipitation). (iv) Multiple related potential target variables can be available per location, since measurement devices often contain different sensors. (v) Labels are spatially often only sparsely available since conducting measurements at all locations of interest is usually infeasible. These properties present challenges but also opportunities when designing ML methods for such tasks.
In the past, environmental ML tasks have been tackled with conventional ML methods, such as linear regression or random forests (RFs). However, the field of ML has made tremendous leaps beyond these classic models through deep learning (DL). In DL, models use multiple layers of neurons, producing increasingly higher-level feature representations with growing layer depth. DL has made previously infeasible ML tasks feasible, significantly improved performance on many tasks compared to existing ML models, and eliminated the need for manual feature engineering in some domains due to its ability to learn features from raw data. To harness these advantages for environmental domains, it is promising to develop novel DL methods for environmental ML tasks.
This thesis presents methods for dealing with special challenges and exploiting opportunities inherent to environmental ML tasks in conjunction with DL. To this end, the proposed methods explore the following techniques: (i) Convolutions as in convolutional neural networks (CNNs) to exploit reoccurring spatial patterns in geospatial data. (ii) Posing the problems as regression tasks to estimate the continuous variables. (iii) Density-based weighting to improve estimation performance for rare and extreme events. (iv) Multi-task learning to make use of multiple related target variables. (v) Semi-supervised learning to cope with label sparsity. Using these techniques, this thesis considers four research questions: (i) Can air pollution be estimated without manual feature engineering? This is answered positively by the introduction of the CNN-based LUR model MapLUR as well as the off-the-shelf LUR solution OpenLUR. (ii) Can colocated pollution data improve spatial air pollution models? Multi-task learning for LUR is developed for this, showing potential for improvements with colocated data. (iii) Can DL models improve the quality of climate model outputs? The proposed DL climate MOS architecture ConvMOS demonstrates this. Additionally, semi-supervised training of multilayer perceptrons (MLPs) for grain size distribution interpolation is presented, which can provide improved input data. (iv) Can DL models be taught to better estimate climate extremes? To this end, density-based weighting for imbalanced regression (DenseLoss) is proposed and applied to the DL architecture ConvMOS, improving climate extremes estimation. These methods show how DL techniques in particular can be developed for environmental ML tasks with their special characteristics in mind. This allows for better models than previously possible with conventional ML, leading to more accurate assessment and better understanding of the state of our environment.
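Technique (iii), density-based weighting, can be sketched as follows. This is a simplified variant with invented data, an arbitrary kernel bandwidth, and a plain inverted-density weighting formula, not the thesis's actual DenseLoss implementation:

```python
import math

def kde(y_values, y, bandwidth=1.0):
    """Gaussian kernel density estimate of the label distribution at y."""
    n = len(y_values)
    return sum(
        math.exp(-0.5 * ((y - yi) / bandwidth) ** 2)
        for yi in y_values
    ) / (n * bandwidth * math.sqrt(2 * math.pi))

def density_weights(y_values, alpha=1.0, eps=1e-6, bandwidth=1.0):
    """Per-sample loss weights: rarer labels receive larger weights."""
    dens = [kde(y_values, y, bandwidth) for y in y_values]
    max_d = max(dens)
    # normalize density to [0, 1], then invert; clip at eps so no sample
    # is ignored entirely
    return [max(1.0 - alpha * d / max_d, eps) for d in dens]

# labels clustered near 0 plus one rare extreme at 10
labels = [0.0, 0.1, -0.1, 0.05, 10.0]
w = density_weights(labels)
```

Multiplying each sample's squared error by its weight then emphasizes the rare extreme during training.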
Nowadays, data centers are becoming increasingly dynamic due to the common adoption of virtualization technologies. Systems can scale their capacity on demand by growing and shrinking their resources dynamically based on the current load. However, the complexity and performance of modern data centers are influenced not only by the software architecture, middleware, and computing resources, but also by network virtualization, network protocols, network services, and configuration. The field of network virtualization is not as mature as server virtualization, and there are multiple competing approaches and technologies. Performance modeling and prediction techniques provide a powerful tool to analyze the performance of modern data centers. However, given the wide variety of network virtualization approaches, no common approach exists for modeling and evaluating the performance of virtualized networks.
The performance community has proposed multiple formalisms and models for evaluating the performance of infrastructures based on different network virtualization technologies. The existing performance models can be divided into two main categories: coarse-grained analytical models and highly detailed simulation models. Analytical performance models are normally defined at a high level of abstraction; they abstract away many details of the real network and therefore have limited predictive power. Simulation models, on the other hand, are normally focused on a selected networking technology and take into account many specific performance-influencing factors, resulting in detailed models that are tightly bound to a given technology, infrastructure setup, or protocol stack.
Existing models are also inflexible, that is, they provide a single solution method without offering the user any means to influence the solution accuracy and solution overhead. To gain flexibility in the performance prediction, the user is required to build multiple different performance models to obtain multiple performance predictions. Each prediction may then have a different focus, different performance metrics, prediction accuracy, and solving time.
The goal of this thesis is to develop a modeling approach that does not require the user to have experience in any of the applied performance modeling formalisms. The approach offers flexibility in modeling and analysis by balancing between (a) the generic character and low overhead of coarse-grained analytical models and (b) the higher prediction accuracy of more detailed simulation models.
The contributions of this thesis intersect with several technologies and research areas: software engineering, model-driven software development, domain-specific modeling, performance modeling and prediction, networking and data center networks, network virtualization, Software-Defined Networking (SDN), and Network Function Virtualization (NFV). The main contributions of this thesis form the Descartes Network Infrastructure (DNI) approach and include:
• Novel modeling abstractions for virtualized network infrastructures. This includes two meta-models that define modeling languages for modeling data center network performance. The DNI and miniDNI meta-models provide means for representing network infrastructures at two different abstraction levels. Regardless of which variant of the DNI meta-model is used, the modeling language provides generic modeling elements that allow describing the majority of existing and future network technologies, while at the same time abstracting factors that have low influence on the overall performance. I focus on SDN and NFV as examples of modern virtualization technologies.
• Network deployment meta-model—an interface between DNI and other meta-models that allows defining mappings between DNI and other descriptive models. The integration with other domain-specific models allows capturing behaviors that are not reflected in the DNI model, for example, software bottlenecks, server virtualization, and middleware overheads.
• Flexible model solving with model transformations. The transformations enable solving a DNI model by transforming it into a predictive model. The model transformations vary in size and complexity depending on the amount of data abstracted in the transformation process and provided to the solver. In this thesis, I contribute six transformations that transform DNI models into various predictive models based on the following modeling formalisms: (a) OMNeT++ simulation, (b) Queueing Petri Nets (QPNs), (c) Layered Queueing Networks (LQNs). For each of these formalisms, multiple predictive models are generated (e.g., models with different levels of detail): (a) two for OMNeT++, (b) two for QPNs, (c) two for LQNs. Some predictive models can be solved using multiple alternative solvers, resulting in up to ten different automated solving methods for a single DNI model.
• A model extraction method that supports the modeler in the modeling process by automatically prefilling the DNI model with network traffic data. The contributed traffic profile abstraction and optimization method provides a trade-off between the size and the level of detail of the extracted profiles.
• A method for selecting feasible solving methods for a DNI model. The method proposes a set of solvers based on trade-off analysis characterizing each transformation with respect to various parameters such as its specific limitations, expected prediction accuracy, expected run-time, required resources in terms of CPU and memory consumption, and scalability.
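The trade-off-based selection of feasible solving methods can be illustrated with a toy sketch; all solver names and numbers below are hypothetical placeholders, not measured characteristics of the actual DNI transformations:

```python
# Hypothetical trade-off table: each solving method is characterized by
# expected accuracy (higher is better) and expected cost (lower is better).
solvers = {
    "omnet_detailed": {"accuracy": 0.95, "runtime_s": 3600, "memory_gb": 8},
    "omnet_coarse":   {"accuracy": 0.90, "runtime_s": 600,  "memory_gb": 4},
    "qpn":            {"accuracy": 0.85, "runtime_s": 120,  "memory_gb": 2},
    "lqn":            {"accuracy": 0.80, "runtime_s": 10,   "memory_gb": 1},
}

def feasible_solvers(solvers, max_runtime_s, min_accuracy):
    """Return the solving methods meeting the user's constraints, fastest first."""
    ok = [
        (name, s) for name, s in solvers.items()
        if s["runtime_s"] <= max_runtime_s and s["accuracy"] >= min_accuracy
    ]
    return [name for name, s in sorted(ok, key=lambda x: x[1]["runtime_s"])]

# a user who needs a prediction within 10 minutes at >= 85% expected accuracy
choice = feasible_solvers(solvers, max_runtime_s=600, min_accuracy=0.85)
# → ["qpn", "omnet_coarse"]
```

The real method additionally weighs limitations and scalability of each transformation, but the principle of filtering and ranking candidate solvers is the same.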
• An evaluation of the approach in the context of two realistic systems. I evaluate the approach with a focus on factors such as prediction of network capacity and interface throughput, applicability, and flexibility in trading off between prediction accuracy and solving time. Despite not focusing on maximizing prediction accuracy, I demonstrate that in the majority of cases the prediction error is low—up to 20% for uncalibrated models and up to 10% for calibrated models, depending on the solving technique.
In summary, this thesis presents the first approach to flexible run-time performance prediction in data center networks, including networks based on SDN. It provides the ability to flexibly balance between performance prediction accuracy and solving overhead. The approach provides the following key benefits:
• It is possible to predict the impact of changes in the data center network on performance. The changes include changes in network topology, hardware configuration, traffic load, and application deployment.
• DNI can successfully model and predict the performance of multiple different network infrastructures, including proactive SDN scenarios.
• The prediction process is flexible, that is, it balances between the granularity of the predictive models and the solving time. Decreased prediction accuracy is usually rewarded with savings in solving time and in the resources required for solving.
• Users are enabled to conduct performance analysis using multiple different prediction methods without requiring expertise and experience in each of the modeling formalisms.
The components of the DNI approach can also be applied to scenarios that are not considered in this thesis. The approach is generalizable, for example, as follows: (a) networks outside of data centers may be analyzed with DNI as long as the background traffic profile is known; (b) uncalibrated DNI models may serve as a basis for design-time performance analysis; (c) the method for extracting and compacting traffic profiles may be used for other, non-network workloads as well.
The Mediterranean area reveals a strong vulnerability to future climate change due to a high exposure to projected impacts and a low capacity for adaptation, highlighting the need for robust regional or local climate change projections, especially for extreme events that strongly affect the Mediterranean environment. The present study investigates two major topics of Mediterranean climate variability: the analysis of dynamical downscaling of present-day and future temperature and precipitation means and extremes from the global to the regional scale, and a comprehensive investigation of temperature and rainfall extremes, including the estimation of uncertainties and the comparison of different statistical methods for precipitation extremes. For these investigations, several observational datasets (CRU, E-OBS, and original stations) are used, as well as ensemble simulations of the regional climate model REMO driven by the coupled global general circulation model ECHAM5/MPI-OM and applying future greenhouse gas (GHG) emission and land degradation scenarios.
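One standard statistical method for precipitation extremes of the kind compared in such studies is fitting an extreme value distribution to annual maxima. A minimal sketch using a method-of-moments Gumbel fit on invented data (the study's actual methods, datasets, and uncertainty estimation are more elaborate):

```python
import math

def gumbel_fit_moments(annual_maxima):
    """Method-of-moments fit of a Gumbel distribution to annual maxima."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi   # scale parameter
    mu = mean - 0.5772 * beta             # location (Euler-Mascheroni constant)
    return mu, beta

def return_level(mu, beta, T):
    """Amount expected to be exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1 - 1 / T))

# invented annual precipitation maxima (mm/day)
maxima = [42.0, 55.0, 38.0, 61.0, 47.0, 70.0, 51.0, 44.0, 58.0, 49.0]
mu, beta = gumbel_fit_moments(maxima)
z100 = return_level(mu, beta, 100)   # 100-year event
```

Comparing such parametric return levels against alternative estimators (e.g., GEV or peaks-over-threshold fits) is one way method comparisons of this kind are set up.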
A quantitative model of groundwater flows contributing to the Goblenz state water scheme at the north-western fringe of the Kalahari was developed within this study. The investigated area corresponds to the Upper Omatako basin and encompasses an outer mountainous rim and sediments of the Kalahari sand desert in the centre. This study revealed the crucial importance of the mountainous rim for the water balance of the Kalahari, both in terms of surface water and groundwater. A hydrochemical subdivision of groundwater types in the mountain rim around the Kalahari was derived from cluster analysis of hydrochemical groundwater data. The western and south-western secondary aquifers within rocks of the Damara Sequence, the Otavi Mountain karst aquifers of the Tsumeb and Abenab subgroups, as well as the Waterberg Etjo sandstone aquifer represent the major hydrochemical groups. Ca/Mg and Sr/Ca ratios made it possible to trace the groundwater flow from the Otavi Mountains towards the Kalahari near Goblenz. The Otavi Mountains and the Waterberg were identified as the main recharge areas, showing almost no or only little isotopic enrichment by evaporation. Soil water balance modelling confirmed that direct groundwater recharge in hard-rock environments tends to be much higher than in areas covered with thick Kalahari sediments. According to the water balance model, average recharge rates in hard-rock exposures with only thin sand cover are between 0.1 and 2.5 % of mean annual rainfall. Within the Kalahari itself, very limited recharge was predicted (< 1 % of mean annual rainfall). In the Upper Omatako basin, the highest recharge probability was found in February, in the late rainfall season. The water balance model also indicated that surface runoff is produced sporadically, triggering indirect recharge events. Several sinkholes were discovered in the Otavi Foreland to the north of Goblenz, forming short-cuts to the groundwater table and preferential recharge zones.
Their relevance for the generation of indirect recharge could be demonstrated by stable isotope variations resulting from observed flood events. Within the Kalahari basin, several troughs were identified in the pre-Kalahari surface by GIS-based analyses. A map of the saturated thickness of Kalahari sediments revealed that these major troughs are partly saturated with groundwater. The main trough, extending from south-west to north-east, is probably connected to the Goblenz state water scheme and represents a major zone of groundwater confluence, receiving groundwater inflows from several recharge areas in the Upper Omatako basin. As a result of the dominance of mountain front recharge, the groundwater of the Kalahari carries the isotopic signature of recharge at higher altitudes. The respective percentages of inflow into the Kalahari from different source areas were determined by a mixing-cell approach. According to the mixing model, Goblenz receives most of its inflow (70 to 80 %) from a shallow Kalahari aquifer in the Otavi Foreland which is connected to the Otavi Mountains. A further 10 to 15 % of the groundwater inflow to the Kalahari at Goblenz derives from the Etjo sandstone aquifer to the south and from the inflow of a mixed component. In conclusion, groundwater abstraction at Goblenz will be affected by measures that heavily influence groundwater inflow from the Otavi Mountains, the Waterberg, and the fractured aquifer north of the Waterberg.
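The mixing-cell idea can be reduced to its simplest case, two end-members and one conservative tracer. The sketch below uses invented tracer values chosen only to illustrate how an inflow fraction of the reported order (70 to 80 %) is obtained; the study itself used a multi-cell, multi-tracer mixing model:

```python
def mixing_fraction(c_mix, c_end1, c_end2):
    """Fraction of end-member 1 in a two-component mixture, derived from
    the concentration of a single conservative tracer."""
    return (c_mix - c_end2) / (c_end1 - c_end2)

# hypothetical delta-18O values (permil) for two end-members and the mixture
d18o_otavi, d18o_etjo = -8.5, -6.0   # assumed end-member signatures
d18o_goblenz = -8.0                  # assumed mixed groundwater at Goblenz
f_otavi = mixing_fraction(d18o_goblenz, d18o_otavi, d18o_etjo)  # → 0.8
```

With more tracers and cells, the same mass-balance idea becomes an overdetermined linear system solved by least squares.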
Dynamic interactions and their changes are at the forefront of current research in bioinformatics and systems biology. This thesis focuses on two particular dynamic aspects of cellular adaptation: miRNAs and metabolites.
miRNAs have an established role in hematopoiesis and megakaryocytopoiesis, and platelet miRNAs have potential as tools for understanding basic mechanisms of platelet function. The thesis highlights the possible role of miRNAs in regulating protein translation during the platelet lifespan, with relevance to platelet apoptosis, and identifies the involved pathways and potential key regulatory molecules. Furthermore, corresponding miRNA/target mRNAs in murine platelets are identified. Moreover, key miRNAs involved in aortic aneurysm are predicted by similar techniques. The clinical relevance of miRNAs as biomarkers, targets, resulting translational therapeutics, and tissue-specific restrictors of gene expression in cardiovascular diseases is also discussed.
In the second part of the thesis, we highlight the importance of scientific software development in metabolic modelling and how it can be helpful in bioinformatics tool development, along with software feature analysis such as that performed on metabolic flux analysis applications. We proposed the “Butterfly” approach to implement scientific software efficiently. Using this approach, software applications were developed for quantitative metabolic flux analysis and efficient mass isotopomer distribution analysis (MIDA) in metabolic modelling, as well as for data management. “LS-MIDA” allows easy and efficient MIDA, and, with a more powerful algorithm and database, the software “Isotopo” allows efficient analysis of metabolic flows, for instance in pathogenic bacteria (Salmonella, Listeria). All three approaches have been published (see Appendices).
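As a minimal illustration of the kind of computation underlying MIDA tools such as LS-MIDA and Isotopo, the binomial model of natural 13C abundance gives a molecule's expected mass isotopomer distribution. This is a textbook simplification, not the published algorithms, which additionally correct measured spectra and estimate labelling:

```python
from math import comb

def natural_mid(n_carbons, p13c=0.0107):
    """Mass isotopomer distribution M+0..M+n of a molecule's carbon backbone
    under natural 13C abundance (binomial model, carbon only)."""
    return [
        comb(n_carbons, i) * p13c ** i * (1 - p13c) ** (n_carbons - i)
        for i in range(n_carbons + 1)
    ]

mid = natural_mid(6)   # e.g. the six carbons of a glucose-derived fragment
```

Deviations of a measured distribution from this natural baseline are what carry the isotopic labelling information used in flux analysis.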
Oral antineoplastic drugs are an important component in the treatment of solid tumour diseases and haematological and immunological malignancies. Oral drug administration is associated with positive features (e.g., non-invasive drug administration, outpatient care with a high level of independence for the patient, and reduced costs for the health care system). The systemic exposure after oral intake, however, is prone to high interindividual variability (IIV), as it strongly depends on gastrointestinal absorption processes, which are per se characterized by high inter- and intraindividual variability. Disease- and patient-specific characteristics (e.g., disease state, concomitant diseases, concomitant medication, patient demographics) may additionally contribute to variability in plasma concentrations between individual patients. In addition, many oral antineoplastic drugs show complex pharmacokinetics (PK), which has not yet been fully investigated and elucidated for all substances. All this may increase the risk of suboptimal plasma exposure (either subtherapeutic or toxic), which may ultimately jeopardise the success of therapy, either through a loss of efficacy or through increased, intolerable adverse drug reactions. Therapeutic drug monitoring (TDM) can be used to detect suboptimal plasma levels and prevent permanent under- or overexposure. It is essential in the treatment of adrenocortical carcinoma (ACC) with mitotane, a substance with unfavourable PK and high IIV. In the current work, an HPLC-UV method for the TDM of mitotane using volumetric absorptive microsampling (VAMS) was developed. A low sample volume (20 µl) of capillary blood was used in the developed method, which facilitates dense sampling, e.g., at treatment initiation. However, no reference ranges for measurements from capillary blood have been established so far, and a simple conversion from capillary concentrations to plasma concentrations was not possible.
To date, the therapeutic range is established only for plasma concentrations, and observed capillary concentrations could not be reliably interpreted. The multi-kinase inhibitor cabozantinib is also used for the treatment of ACC. However, not all PK properties, like the characteristic second peak in the cabozantinib concentration-time profile, have been fully understood so far. To gain a mechanistic understanding of the compound, a physiologically based pharmacokinetic (PBPK) model was developed and various theories for modelling the second peak were explored, revealing that enterohepatic circulation (EHC) of the compound is most plausible. Cabozantinib is mainly metabolized via CYP3A4 and is susceptible to drug-drug interactions (DDIs) with, e.g., CYP3A4 inducers. The DDI between cabozantinib and rifampin was investigated with the developed PBPK model and revealed a reduction of cabozantinib exposure (AUC) by 77%. Hence, the combination of cabozantinib with strong CYP inducers should be avoided. If this is not possible, co-administration should be monitored using TDM. The model was also used to simulate cabozantinib plasma concentrations at different stages of liver injury, showing a 64% and 50% increase in total exposure for mild and moderate liver injury, respectively. Ruxolitinib is used, among others, for patients with acute and chronic graft-versus-host disease (GvHD). These patients often also receive posaconazole for invasive fungal prophylaxis, leading to a CYP3A4-mediated DDI between both substances. Different dosing recommendations from the FDA and EMA on the use of ruxolitinib in combination with posaconazole complicate clinical use. To simulate the effect of this relevant DDI, two separate PBPK models for ruxolitinib and posaconazole were developed and combined. Predicted ruxolitinib exposure was compared to observed plasma concentrations obtained in GvHD patients. The model simulations showed that the observed ruxolitinib concentrations in these patients were generally higher than the simulated concentrations in healthy individuals, with standard dosing present in both scenarios.
According to the developed model, the EMA-recommended ruxolitinib dose reduction seems plausible since, due to the complexity of the disease and the intake of extensive co-medication, ruxolitinib plasma concentrations can be higher than expected.
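The direction of an induction DDI can be illustrated with back-of-envelope PK instead of a full PBPK model. Here the dose, baseline clearance, and the 4.3-fold induction are hypothetical values, picked only so that the resulting AUC drop matches the order of the reported 77%:

```python
def auc_oral(dose_mg, clearance_l_per_h, bioavailability=1.0):
    """Exposure of an orally dosed drug in the simplest model: AUC = F * Dose / CL."""
    return bioavailability * dose_mg / clearance_l_per_h

def ddi_auc_change(cl_baseline, fold_induction, dose_mg=60.0):
    """Relative AUC change when an inducer multiplies clearance."""
    auc_base = auc_oral(dose_mg, cl_baseline)
    auc_ddi = auc_oral(dose_mg, cl_baseline * fold_induction)
    return (auc_ddi - auc_base) / auc_base

# hypothetical: CYP3A4-mediated clearance induced ~4.3-fold by rifampin
change = ddi_auc_change(cl_baseline=2.0, fold_induction=4.3)  # ≈ -0.77
```

A PBPK model reaches a comparable conclusion mechanistically, by scaling enzyme abundance rather than lumping everything into one clearance factor.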
The present thesis considers the modelling of gas mixtures via a kinetic description. Fundamentals of the Boltzmann equation for gas mixtures and the BGK approximation are presented. In particular, issues in extending these models to gas mixtures are discussed. A non-reactive two-component gas mixture is considered. The two-species mixture is modelled by a system of kinetic BGK equations featuring two interaction terms to account for momentum and energy transfer between the two species. The model presented here contains several models by physicists and engineers as special cases. Consistency of this model is proven: conservation properties, positivity of all temperatures and the H-theorem. The form of the global equilibrium as Maxwell distributions is specified. Moreover, the usual macroscopic conservation laws can be derived.
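The two-species BGK system with two interaction terms described above can be sketched in the following generic form; the notation is illustrative, and the precise mixture Maxwellians and collision frequencies are chosen in the thesis so that momentum and energy exchange, positivity, and the H-theorem hold:

```latex
\partial_t f_1 + v \cdot \nabla_x f_1
  = \nu_{11} n_1 \left( M_1 - f_1 \right) + \nu_{12} n_2 \left( M_{12} - f_1 \right),
\qquad
\partial_t f_2 + v \cdot \nabla_x f_2
  = \nu_{22} n_2 \left( M_2 - f_2 \right) + \nu_{21} n_1 \left( M_{21} - f_2 \right)
```

Here $f_k = f_k(x,v,t)$ are the species distribution functions, $M_k$ the intra-species Maxwellians, and $M_{12}$, $M_{21}$ mixture Maxwellians modelling interspecies momentum and energy transfer; the single-relaxation-term models from the literature arise as the special case with one interaction term per species.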
In the literature, there is another type of BGK model for gas mixtures developed by Andries, Aoki and Perthame, which contains only one interaction term. In this thesis, the advantages of these two types of models are discussed and the usefulness of the model presented here is shown by using this model to determine an unknown function in the energy exchange of the macroscopic equations for gas mixtures described in the literature by Dellacherie. In addition, for each of the two models existence and uniqueness of mild solutions is shown. Moreover, positivity of classical solutions is proven.
Then, the model presented here is applied to three physical applications: a plasma consisting of ions and electrons, a gas mixture which deviates from equilibrium and a gas mixture consisting of polyatomic molecules.
First, the model is extended to a model for charged particles. Then, the equations of magnetohydrodynamics are derived from this model. Next, this extended model is applied to a mixture of ions and electrons in a special physical configuration which can be found, for example, in a tokamak. The mixture is partly in equilibrium in some regions, while it deviates from equilibrium in others. The model presented in this thesis is used for this purpose, since it has the advantage of separating the intra- and interspecies interactions. Then, a new model based on a micro-macro decomposition is proposed in order to capture the physical regime of being partly in equilibrium, partly not. Theoretical results are presented (convergence rates to equilibrium in the space-homogeneous case and Landau damping for mixtures) in order to compare them with numerical results.
Second, the model presented here is applied to a gas mixture which deviates from equilibrium such that it is described by Navier-Stokes equations on the macroscopic level. In this macroscopic description, four physical coefficients are expected to appear, characterizing the physical behaviour of the gases: the diffusion coefficient, the viscosity coefficient, the heat conductivity and the thermal diffusion parameter. A Chapman-Enskog expansion of the model presented here is performed in order to capture three of these four physical coefficients. In addition, several possible extensions to an ellipsoidal statistical model for gas mixtures are proposed in order to capture the fourth coefficient. Three extensions are proposed: an extension which is as simple as possible, an intuitive extension copying the one-species case, and an extension which takes into account the physical motivation of the physicist Holway, who invented the ellipsoidal statistical model for one species. Consistency of the extended models (conservation properties, positivity of all temperatures and the H-theorem) is proven. The shape of the global Maxwell distributions in equilibrium is specified.
Third, the model presented here is applied to polyatomic molecules. A multi-component gas mixture with translational and internal energy degrees of freedom is considered. The two species are allowed to have different degrees of freedom in internal energy and are modelled by a system of kinetic ellipsoidal statistical equations. Consistency of this model is shown: conservation properties, positivity of the temperature, the H-theorem and the form of Maxwell distributions in equilibrium. For numerical purposes, the Chu reduction is applied to the developed model for polyatomic gases to reduce the complexity of the model, and an application to a mixture of a monatomic and a diatomic gas is given.
Last, the limit from the model presented here to the dissipative Euler equations for gas mixtures is proven.
Landslide susceptibility assessment in the Chiconquiaco Mountain Range area, Veracruz (Mexico)
(2022)
In Mexico, numerous landslides occur each year and Veracruz represents the state with the third highest number of events. Especially the Chiconquiaco Mountain Range, located in the central part of Veracruz, is highly affected by landslides and no detailed information on the spatial distribution of existing landslides or future occurrences is available. This leaves the local population exposed to an unknown threat and unable to react appropriately to this hazard or to consider the potential landslide occurrence in future planning processes.
Thus, the overall objective of the present study is to provide a comprehensive assessment of the landslide situation in the Chiconquiaco Mountain Range area. Here, the combination of a site-specific and a regional approach makes it possible to investigate the causes, triggers, and process types as well as to model the landslide susceptibility for the entire study area.
For the site-specific approach, the focus lies on characterizing the Capulín landslide, which represents one of the largest mass movements in the area. In this context, the task is to develop a multi-methodological concept, which concentrates on cost-effective, flexible and non-invasive methods. This approach shows that the applied methods complement each other very well and their combination allows for a detailed characterization of the landslide.
The analyses revealed that the Capulín landslide is a complex mass movement type. It comprises rotational movement in the upper parts and translational movement in the lower areas, as well as flow processes at the flank and foot area, and is therefore classified as a compound slide-flow according to Cruden and Varnes (1996). Furthermore, the investigations show that the Capulín landslide represents a reactivation of a former process. This is important new information, especially with regard to the other landslides identified in the study area. Both the road reconstructed after the landslide, which runs through the landslide mass, and the stream causing erosion processes at the foot of the landslide severely affect the stability of the landslide, making it highly susceptible to future reactivation processes. This is particularly important as the landslide is located only a few hundred meters from the village El Capulín, and an extension of the landslide area could cause severe damage.
The next step in the landslide assessment consists of integrating the data obtained in the site-specific approach into the regional analysis. Here, the focus lies on transferring the generated data to the entire study area. The developed methodological concept yields applicable results, which is supported by different validation approaches.
The susceptibility modeling as well as the landslide inventory reveal that the highest probability of landslide occurrence is related to areas with moderate slopes covered by slope deposits. These slope deposits comprise material from old mass movements and erosion processes and are highly susceptible to landslides. The results give new insights into the landslide situation in the Chiconquiaco Mountain Range area, since landslide occurrence was previously related to steep slopes of basalt and andesite.
The susceptibility map contributes to a better assessment of the landslide situation in the study area and simultaneously proves that it is crucial to include the specific characteristics of the respective area in the modeling process; otherwise, the local conditions may not be represented correctly.
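Susceptibility modeling of this kind often scores each raster cell with a statistical classifier. Below is a minimal logistic sketch with made-up weights, chosen only to reflect the qualitative finding that moderate slopes on slope deposits can score higher than steep rock slopes; it is not the study's fitted model:

```python
import math

def susceptibility(slope_deg, on_slope_deposit,
                   w0=-4.0, w_slope=0.1, w_deposit=2.5):
    """Logistic landslide-susceptibility score for one raster cell.
    All weights are hypothetical placeholders."""
    z = w0 + w_slope * slope_deg + w_deposit * (1.0 if on_slope_deposit else 0.0)
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)

# moderate slope on slope deposits vs. steep basalt slope without deposits
s_moderate_deposit = susceptibility(15, True)
s_steep_rock = susceptibility(35, False)
```

In practice, such weights are fitted against a landslide inventory and the resulting scores are validated (e.g., with ROC curves) before being mapped.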
Today’s cloud data centers consume an enormous amount of energy, and energy consumption will rise in the future. An estimate from 2012 found that data centers consume about 30 billion watts of power, resulting in about 263TWh of energy usage per year. The energy consumption will rise to 1929TWh until 2030. This projected rise in energy demand is fueled by a growing number of services deployed in the cloud. 50% of enterprise workloads have been migrated to the cloud in the last decade so far. Additionally, an increasing number of devices are using the cloud to provide functionalities and enable data centers to grow. Estimates say more than 75 billion IoT devices will be in use by 2025.
The growing energy demand also increases CO2 emissions. Assuming a CO2 intensity of 200 g CO2 per kWh yields roughly 227 million tons of CO2, which is more than the emissions of all energy-producing power plants in Germany in 2020.
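The unit conversions behind these figures can be checked with a few lines of arithmetic. This is only a back-of-the-envelope sketch: the power draw, the 2030 consumption projection, and the 200 g/kWh intensity are the values assumed in the text, not independent data.

```python
# Back-of-the-envelope check of the data center energy figures above.
# Assumptions from the text: 30 GW average draw (2012), 1929 TWh
# projected annual consumption (2030), CO2 intensity of 200 g/kWh.

HOURS_PER_YEAR = 365 * 24  # 8760

power_gw = 30
energy_2012_twh = power_gw * HOURS_PER_YEAR / 1000  # GW * h -> TWh
print(f"2012 energy use: ~{energy_2012_twh:.0f} TWh/year")  # ~263 TWh

energy_2030_twh = 1929
co2_intensity_g_per_kwh = 200
# 1 TWh = 1e9 kWh; 1 million tons = 1e12 g
co2_mt = energy_2030_twh * 1e9 * co2_intensity_g_per_kwh / 1e12
print(f"CO2 at 2030 consumption: ~{co2_mt:.0f} million tons")  # ~386 Mt
```

The first conversion reproduces the 263 TWh figure exactly; the second shows the order of magnitude (hundreds of millions of tons, i.e., billions of kilograms) that makes the comparison with Germany's power plant emissions meaningful.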
However, data centers consume energy because they respond to service requests that are fulfilled through computing resources. Hence, it is not the users and devices that consume the energy in the data center but the software that controls the hardware. While the hardware physically consumes the energy, it is not always responsible for wasting it. The software itself plays a vital role in reducing the energy consumption and CO2 emissions of data centers. The scenario of this thesis therefore focuses on software development.
Nevertheless, we must first show developers that software contributes to energy consumption by providing evidence of its influence. The second step is to provide methods for assessing an application’s power consumption during the different phases of the development process and for supporting modern DevOps and agile development methods. We therefore need an automatic selection of system-level energy-consumption models that can accommodate rapid changes in the source code, as well as application-level models that allow developers to locate power-consuming software parts for continuous improvement. Afterward, we need emulation to assess energy efficiency before the actual deployment.
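As an illustration of what a system-level energy-consumption model can look like, the sketch below uses the widely known linear utilization-based power model. The model choice and the wattages are assumptions for illustration only, not values taken from the thesis.

```python
# Minimal sketch of a system-level power model of the kind an automatic
# selection step might choose: the common linear utilization model
# P(u) = P_idle + (P_max - P_idle) * u. The 100 W / 250 W figures are
# illustrative assumptions, not measurements from the thesis.

def power_watts(utilization: float, p_idle: float = 100.0,
                p_max: float = 250.0) -> float:
    """Estimated power draw at a given CPU utilization in [0, 1]."""
    u = min(max(utilization, 0.0), 1.0)  # clamp to the valid range
    return p_idle + (p_max - p_idle) * u

def energy_joules(samples, interval_s: float = 1.0) -> float:
    """Integrate power over equally spaced utilization samples."""
    return sum(power_watts(u) * interval_s for u in samples)

trace = [0.2, 0.8, 0.5, 0.1]  # utilization sampled once per second
print(energy_joules(trace))   # roughly 640 J for this 4-second trace
```

Because the model is driven purely by a utilization trace, it can follow rapid source-code changes without re-instrumentation, which is the property the paragraph above asks of system-level models.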
In the future Internet, the people-centric communication paradigm will be complemented by ubiquitous communication among people and devices, or even communication between devices. This comes along with the need for a more flexible, cheap, widely available Internet access. Two types of wireless networks are considered most appropriate for attaining those goals. While wireless sensor networks (WSNs) enhance the Internet’s reach by providing data about the properties of the environment, wireless mesh networks (WMNs) extend the Internet access possibilities beyond the wired backbone. This monograph contains four chapters that present modeling and optimization methods for WSNs and WMNs. Minimizing energy consumption is the most important goal of WSN optimization, and the literature consequently provides countless energy consumption models. The first part of the monograph studies to what extent the chosen energy consumption model influences the outcome of analytical WSN optimizations. These considerations enable the second contribution, namely overcoming the problems on the way to a standardized energy-efficient WSN communication stack based on IEEE 802.15.4 and ZigBee. For WMNs, both problems are of minor interest, whereas the network performance carries a higher weight. The third part of the work therefore presents algorithms for calculating the max-min fair network throughput in WMNs with multiple link rates and Internet gateways. The last contribution of the monograph investigates the impact of the LRA concept, which proposes to systematically assign more robust link rates than actually necessary, thereby allowing to exploit the trade-off between spatial reuse and per-link throughput. A systematic study shows that a network-wide slightly more conservative LRA than necessary increases the throughput of a WMN where max-min fairness is guaranteed.
It moreover turns out that LRA is suitable for increasing the performance of a contention-based WMN and is a valuable optimization tool.
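The max-min fair throughput computation mentioned above can be illustrated with the classical progressive-filling algorithm. This is a generic sketch, not the monograph's algorithm: the two-link, three-flow topology below is a made-up assumption, and the thesis's multi-rate WMN formulation with gateway constraints is considerably more involved.

```python
# Classical progressive filling for max-min fair rate allocation:
# raise all unfrozen flow rates in lockstep until some link saturates,
# then freeze every flow crossing a saturated link, and repeat.

def max_min_fair(links, flows):
    """links: {link: capacity}; flows: {flow: set of links it crosses}."""
    rate = {f: 0.0 for f in flows}
    frozen = set()
    residual = dict(links)
    while len(frozen) < len(flows):
        # Per-link fair share of the remaining capacity among the
        # unfrozen flows that still cross it.
        share = {}
        for l in residual:
            n = sum(1 for f in flows if l in flows[f] and f not in frozen)
            if n:
                share[l] = residual[l] / n
        # The tightest link determines the common rate increment.
        delta = min(share.values())
        bottlenecks = {l for l, s in share.items() if s == delta}
        for f in flows:
            if f not in frozen:
                rate[f] += delta
        for l in share:
            n = sum(1 for f in flows if l in flows[f] and f not in frozen)
            residual[l] -= delta * n
        for f in flows:
            if f not in frozen and flows[f] & bottlenecks:
                frozen.add(f)
    return rate

# Illustrative topology: link "b" (capacity 8) is the bottleneck shared
# by f2 and f3; f1 then absorbs the slack on link "a" (capacity 10).
links = {"a": 10.0, "b": 8.0}
flows = {"f1": {"a"}, "f2": {"a", "b"}, "f3": {"b"}}
print(max_min_fair(links, flows))  # {'f1': 6.0, 'f2': 4.0, 'f3': 4.0}
```

The example shows the defining property of max-min fairness: no flow's rate can be increased without decreasing the rate of a flow that already has an equal or smaller rate.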