The study investigates the water resources and aquifer dynamics of the igneous fractured aquifer system of the Troodos Mountains in Cyprus, using a coupled finite-difference water balance and groundwater modelling approach. The numerical water balance modelling forms the quantitative framework by assessing groundwater recharge and evapotranspiration, which serve as input parameters for the groundwater flow models. High-recharge areas are identified within the heavily fractured Gabbro and Sheeted Dyke formations in the upper Troodos Mountains, while the impervious Pillow Lava promontories, with low precipitation and high evapotranspiration, show unfavourable recharge conditions. Within the water balance studies, evapotranspiration is split into actual evapotranspiration and so-called secondary evapotranspiration, which represents the water demand of open waters, moist areas and irrigated areas. By separating the evapotranspiration of open waters and moist areas from that of irrigated areas, groundwater abstraction needs are quantified, allowing the simulation of single-well abstraction rates in the groundwater flow models. Two sets of balanced groundwater models simulate the aquifer dynamics in the presented study. First, the basic groundwater percolation system is investigated using two-dimensional vertical flow models along geological cross-sections that depict the entire Troodos Mountains down to a depth of several thousand metres. The deeply percolating groundwater system originates in the high-recharge areas of the upper Troodos, shows quasi-stratiform flow in the Gabbro and Sheeted Dyke formations, and rises to the surface in the vicinity of the impervious Pillow Lava promontories. Residence times are mostly below 25 years; those of the deepest fluxes reach several hundred years. Moreover, inter-basin flow and indirect recharge of the Circum Troodos Sedimentary Succession are identified.
In a second step, the upper and most productive part of the fractured igneous aquifer system is investigated in a regional, horizontal groundwater model, including management scenarios and inter-catchment flow studies. In a natural scenario without groundwater abstraction, the recovery potential of the aquifer is tested. Predicted future water demand is simulated in an increased-abstraction scenario. The results show a high sensitivity to changes in well abstraction rates in the Pillow Lava and Basal Group promontories, where changes in groundwater heads range from a few tens of metres to more than one hundred metres; the sensitivity in the more productive parts of the aquifer system is lower. Inter-catchment flow studies indicate that, besides the dominant effluent conditions in the Troodos Mountains, single reaches show influent conditions and are underflowed by groundwater. These fluxes influence the local water balance and generate inter-catchment flow. The balanced groundwater models thus form a comprehensive modelling system, supplying future detailed models with information on boundary conditions and inter-catchment flow, and allowing the simulation of the impacts of land use or climate change scenarios on the dynamics and water resources of the Troodos aquifer system.
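The vertical flow models rest on a finite-difference discretization of the steady-state groundwater flow equation. As a minimal, hypothetical sketch of that principle (homogeneous aquifer, fixed-head boundaries left and right, no-flow boundaries top and bottom; the actual thesis models additionally include recharge, heterogeneous properties and far larger domains), the hydraulic heads can be relaxed iteratively:

```python
def solve_heads(nx, ny, h_left, h_right, tol=1e-6, max_iter=10000):
    # Steady-state 2-D groundwater flow (Laplace equation) on a uniform grid.
    # Dirichlet (fixed-head) boundaries on the left/right columns,
    # no-flow (mirror) boundaries on the top/bottom rows.
    h = [[(h_left + h_right) / 2.0 for _ in range(nx)] for _ in range(ny)]
    for j in range(ny):
        h[j][0] = h_left
        h[j][-1] = h_right
    for _ in range(max_iter):
        diff = 0.0
        for j in range(ny):
            for i in range(1, nx - 1):          # interior columns only
                up = h[j - 1][i] if j > 0 else h[j + 1][i]       # mirror row
                dn = h[j + 1][i] if j < ny - 1 else h[j - 1][i]  # mirror row
                new = 0.25 * (h[j][i - 1] + h[j][i + 1] + up + dn)
                diff = max(diff, abs(new - h[j][i]))
                h[j][i] = new                   # Gauss-Seidel in-place update
        if diff < tol:
            break
    return h
```

With these boundary conditions the head field converges to the linear profile between the two fixed heads, which makes the sketch easy to verify.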
The urban micro climate has been increasingly recognised as an important aspect of urban planning. Urban planners therefore need reliable information on the micro climatic characteristics of the urban environment. A suitable spatial scale and large spatial coverage are important requirements for such information. This thesis presents a conceptual framework for the use of airborne hyperspectral data to support urban micro climate characterisation, taking into account the information needs of urban planning. The potential of hyperspectral remote sensing for characterising the micro climate is demonstrated and evaluated by applying HyMap airborne hyperspectral and height data to a case study of the German city of Munich. The developed conceptual framework consists of three parts. The first is concerned with the capabilities of airborne hyperspectral remote sensing to map physical urban characteristics. The high spatial resolution of the sensor allows the relatively small urban objects to be separated. The high spectral resolution enables the identification, at up to sub-pixel level, of the large range of surface materials used in an urban area. The surface materials are representative of the urban objects of which the urban landscape is composed. These spatial urban characteristics strongly influence the urban micro climate. The second part of the conceptual framework provides an approach for using the hyperspectral surface information to characterise the urban micro climate. This can be achieved by integrating the remote sensing material map into a micro climate model. Spatial indicators were also found to provide useful information on the micro climate for urban planners. They are commonly used in urban planning to describe building blocks and are related to several micro climatic parameters such as temperature and humidity.
The third part of the conceptual framework addresses the combination and presentation of the derived indicators and simulation results in consideration of the planning requirements. Building blocks and urban structural types were found to be an adequate means of grouping and presenting the derived information for micro climate related questions to urban planners. The conceptual framework was successfully applied to a case study in Munich. Airborne hyperspectral HyMap data were used to derive a material map at sub-pixel level by multiple-endmember linear spectral unmixing, a technique developed by the German Research Centre for Geosciences (GFZ) for applications in Dresden and Potsdam. A priori information on building locations was used to support the separation of spectrally similar materials used both on building roofs and on non-built surfaces. In addition, surface albedo and leaf area index were derived from the HyMap data. The sub-pixel material map, supported by object height data, was then used to derive spatial indicators such as imperviousness or building density. To provide a more detailed micro climate characterisation at building block level, the surface materials, albedo, leaf area index (LAI) and object height were used as input for simulations with the micro climate model ENVI-met. In conclusion, this thesis demonstrated the potential of hyperspectral remote sensing to support urban micro climate characterisation. A detailed mapping of surface materials at sub-pixel level could be performed, providing valuable, detailed information on a large range of spatial characteristics relevant to the assessment of the urban micro climate. The developed conceptual framework proved applicable to the case study, providing a means to characterise the urban micro climate. The remote sensing products and subsequent micro climatic information are presented at a suitable spatial scale and in understandable maps and graphics.
The use of well-known spatial indicators and the framework of urban structural types can simplify the communication of micro climate findings to urban planners. Further research is needed primarily on the sensitivity of the micro climate model to the remote sensing based input parameters and on the general relation between climate parameters and spatial indicators, by comparison with other cities.
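Linear spectral unmixing models each pixel spectrum as a convex combination of endmember spectra. As a minimal illustration of the principle (the thesis applies multiple-endmember unmixing over many urban materials, not this hypothetical two-endmember toy), the abundance fraction under a sum-to-one constraint has a closed-form least-squares solution:

```python
def unmix_two(pixel, em1, em2):
    # Two-endmember linear unmixing with sum-to-one constraint:
    # pixel ≈ f*em1 + (1-f)*em2. Minimizing the squared residual over f
    # gives f = <pixel - em2, em1 - em2> / ||em1 - em2||^2.
    d = [a - b for a, b in zip(em1, em2)]
    num = sum((p - b) * di for p, b, di in zip(pixel, em2, d))
    den = sum(di * di for di in d)
    f = num / den
    return max(0.0, min(1.0, f))  # clip to physically meaningful fractions
```

A pixel synthesized as a 25/75 mixture of two endmember spectra is recovered exactly, which is the sanity check commonly used for such unmixing code.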
The first problem is that of optimal volume allocation in procurement. The choice of this problem was motivated by a study whose objective was to support decision-making at two procurement organizations for the procurement of Depot Medroxyprogesterone Acetate (DMPA), an injectable contraceptive. At the time of this study, only one supplier that had undergone the costly and lengthy process of WHO pre-qualification was available to these organizations. However, a new entrant supplier was expected to receive WHO qualification within the next year, thus becoming a viable second source for DMPA procurement. When deciding how to allocate the procurement volume between the two suppliers, the buyers had to consider the impact on price as well as on risk. Higher allocations to one supplier yield lower prices but expose a buyer to higher supply risk, while an even allocation results in lower supply risk but also reduces competitive pressure, leading to higher prices. Our research investigates this single- versus dual-sourcing problem and quantifies in one model the impact of the procurement volume on competition and risk. To support decision-makers, we develop a mathematical framework that accounts for the characteristics of donor-funded global health markets and models the effects of an entrant on purchasing costs and supply risks. Our in-depth analysis provides insights into how the optimal allocation decision is affected by various parameters and explores the trade-off between competition and supply risk. For example, we find that, even if the entrant supplier introduces longer lead times and a higher default risk, the buyer still benefits from dual sourcing.
However, these risk-diversification benefits depend heavily on the entrant’s in-country registration: if the buyer can ship the entrant’s product to only a selected number of countries, the buyer does not benefit from dual sourcing as much as it would if the entrant’s product could be shipped to all supplied countries. We show that the buyer should be interested in qualifying the entrant’s product in countries with high demand first.
In the second problem we explore a new tendering mechanism called the postponement tender, which can be useful when buyers in the global health industry want to contract new generics suppliers with uncertain product quality. The mechanism allows a buyer to postpone part of the procurement volume’s allocation so the buyer can learn about the unknown quality before allocating the remaining volume to the best supplier in terms of both price and quality. We develop a mathematical model to capture the decision-maker’s trade-offs in setting the right split between the initial volume and the postponed volume. Our analysis shows that a buyer can benefit from this mechanism more than it can from a single-sourcing format, as it can decrease the risk of receiving poor quality (in terms of product quality and logistics performance) and even increase competitive pressure between the suppliers, thereby lowering the purchasing costs. By considering market parameters like the buyer’s size, the suppliers’ value (difference between quality and cost), quality uncertainty, and minimum order volumes, we derive optimal sourcing strategies for various market structures and explore how competition is affected by the buyer’s learning about the suppliers’ quality through the initial volume.
The third problem considers the repeated procurement problem of pharmacies in Kenya that have multi-product inventories. Coordinating orders allows pharmacies to achieve lower procurement prices by using the quantity discounts manufacturers offer and sharing fixed ordering costs, such as logistics costs. However, coordinating and optimizing orders for multiple products is complex and costly. To solve the coordinated procurement problem, also known as the Joint Replenishment Problem (JRP) with quantity discounts, a novel, data-driven inventory policy using sample-average approximation is proposed. The inventory policy is developed based on renewal theory and is evaluated using real-world sales data from Kenyan pharmacies. Multiple benchmarks are used to evaluate the performance of the approach. First, it is compared to the theoretically optimal policy (that is, a dynamic-programming policy) in the single-product setting without quantity discounts to show that the proposed policy results in comparable inventory costs. Second, the policy is evaluated for the original multi-product setting with quantity discounts and compared to ex-post optimal costs. The evaluation shows that the policy’s performance in the multi-product setting is similar to its performance in the single-product setting (with respect to ex-post optimal costs), suggesting that the proposed policy offers a promising, data-driven solution to these types of multi-product inventory problems.
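The sample-average approximation idea behind the proposed policy can be illustrated on a much simpler, hypothetical single-product, single-period decision (the actual policy handles joint replenishment, quantity discounts and renewal-theoretic cost rates): each candidate order quantity is scored by its average cost over historical demand samples, and the empirical minimizer is chosen.

```python
def saa_order_quantity(demand_samples, unit_cost, holding, shortage, candidates):
    # Sample-average approximation: replace the (unknown) demand distribution
    # by historical samples, evaluate each candidate order quantity against
    # them, and pick the empirical cost minimizer.
    def avg_cost(q):
        total = 0.0
        for d in demand_samples:
            total += (unit_cost * q
                      + holding * max(q - d, 0)     # leftover inventory cost
                      + shortage * max(d - q, 0))   # unmet demand penalty
        return total / len(demand_samples)
    return min(candidates, key=avg_cost)
```

With shortage cost three times the holding cost, the empirical optimum moves toward the upper end of the observed demands, matching the newsvendor intuition of a 75 % service fractile.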
Platelets (thrombocytes) are the mediators of cellular hemostasis. Their ability to aggregate and to attach to the surrounding tissue of injured blood vessels is governed by a complex intracellular signal transduction network comprising both activating and inhibiting subnetworks. Understanding these processes is of high medical relevance. In this work, platelet signal transduction was analyzed using both a Boolean model and several dynamic models. The Boolean modelling yielded interesting insights into the interplay of individual subnetworks in mediating irreversible platelet activation and revealed mechanisms of interaction with the inhibitory prostaglandin system. Among other important system components, the model includes calcium signalling, the activation of key kinases such as Src and PKC, integrin-mediated outside-in as well as inside-out signalling, and autocrine ADP and thromboxane production. Using this Boolean approach, the system's intrinsic threshold behaviour was analyzed further, revealing an inversely proportional dependence of the relative activating stimulus required to exceed the threshold on the absolute inhibitory input. The system thus adapts to higher prostaglandin concentrations by increasing its sensitivity to activators such as von Willebrand factor and collagen, enabling platelet-mediated hemostasis even under locally inhibiting conditions. The next step was the implementation of a differential-equation-based model of platelet prostaglandin signal transduction in order to obtain a detailed picture of the dynamics of the inhibitory part of the network. The kinetic parameters of this model were partly taken from the literature.
The remaining parameters were estimated from a comprehensive combination of dose- and time-dependent cAMP and phospho-VASP measurements. The process involved several iterations of model prediction on the one hand and experimental design on the other. The model quantifies the effects of the prostaglandin receptors IP, DP1, EP3 and EP4 and of the ADP receptor P2Y12 on the underlying signalling cascade. EP4 shows the strongest effect in the activating fraction, whereas EP3 exerts a stronger inhibitory effect than the clopidogrel-sensitive ADP receptor P2Y12. Furthermore, the properties of the negative feedback loop of PKA on the cAMP level were investigated, and a direct influence of PKA on adenylyl cyclase was identified, in the form of a reduction of its maximal catalytic rate. The identifiability of the estimated parameters was assessed by profile-likelihood estimation. In a third step, a dynamic model comprising both the activating and the inhibitory parts of the network was implemented. The topology of this model was fixed on the basis of a priori knowledge, following that of the Boolean model. The model parameters were estimated from Western blot, calcium and aggregation measurements. Here, too, parameter identifiability was checked by profile-likelihood estimation. The reversibility of platelet aggregation observed at low ligand concentrations could be reproduced with this model. At intermediate ADP concentrations, however, the model settled into a steady state in a partially activated condition, and thus showed no bistable threshold behaviour.
Whether this behaviour is explained by a more environment-based mechanism of all-or-none activation, in which the transition from reversible to irreversible aggregation is determined more by paracrine effects of the entire thrombus than by the signal transduction properties of the individual cell, remains to be shown by future experiments. Altogether, the models developed here provide interesting insights into platelet function and enable the simulation of pharmacological and genetic interventions such as receptor modulations and knock-outs. They thereby offer implications for the development and treatment of pathophysiological conditions, as well as valuable stimuli for further research.
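A Boolean network model updates binary node states from logical rules. The following toy network is purely illustrative (hypothetical wiring, loosely inspired by the interplay of activating and inhibiting subnetworks described above; the actual thesis model is far larger): ADP activates calcium signalling, prostaglandin input activates PKA, and aggregation requires calcium without PKA inhibition.

```python
def step(state):
    # One synchronous update: every node is recomputed from the OLD state.
    # Rules (toy example): ADP -> Ca; PGI2 -> PKAact;
    # Aggregation = Ca AND NOT PKAact. Inputs ADP and PGI2 are held fixed.
    return {
        "Ca": state["ADP"],
        "PKAact": state["PGI2"],
        "Aggregation": state["Ca"] and not state["PKAact"],
        "ADP": state["ADP"],
        "PGI2": state["PGI2"],
    }

def simulate(state, n):
    # Iterate n synchronous update steps and return the final state.
    for _ in range(n):
        state = step(state)
    return state
```

Starting from an ADP stimulus without prostaglandin input, aggregation switches on after two steps; with the inhibitory input active, it stays off, mirroring the activating-versus-inhibiting logic of the full model.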
The aim of this work is, in addition to the synthesis of sol-gel functional coatings based on transparent conducting oxides (TCOs), the comprehensive infrared-optical and electrical characterization as well as the modelling of these coatings. Spectrally selective functional coatings were applied to glass and polycarbonate substrates both via classical sol-gel processes and via redispersed nanoparticle sols; these coatings should exhibit the highest possible reflectance in the infrared spectral range and, correspondingly, the lowest possible total emissivity, as well as a low electrical sheet resistance. For this purpose, doped metal oxides were used, namely tin-doped indium oxide (ITO) on the one hand and aluminum-doped zinc oxide (AZO) on the other. Within this work, various parameters were investigated in depth that matter in the preparation of low-emissivity ITO and AZO functional coatings with regard to optimizing their infrared-optical and electrical properties as well as their transmittance in the visible spectral range.
In addition to the sol composition of classical sol-gel ITO coating solutions, the coating and annealing parameters in the preparation of classical sol-gel ITO and AZO functional coatings were characterized and optimized. For the classical sol-gel ITO coatings, a key result of this work is that the total emissivity could be reduced by 0.18 to 0.17, at roughly constant visible transmittance and electrical sheet resistance, when single coatings with a faster withdrawal speed were produced by dip coating instead of (optimized) multiple coatings. With a classical sol-gel ITO single coating dipped at a markedly increased withdrawal speed of 600 mm/min, the lowest total emissivity of this work, 0.17, was achieved.
The total emissivities and electrical sheet resistances of classical sol-gel AZO functional coatings could be reduced considerably with the final annealing process optimized in this work. For ninefold AZO coatings, the total emissivity was reduced by 0.34 to 0.50 and the electrical sheet resistance by almost 89 % to 65 Ω/sq. Hall measurements further demonstrated that the optimized final annealing process, which features an elevated temperature during the reduction of the coatings, generated an approximately twofold charge-carrier density of N = 4.3·10¹⁹ cm⁻³ and an approximately threefold mobility of µ = 18.7 cm²/Vs in the coatings, compared to coatings cured with the old final annealing process. This indicates that the optimized heating scheme produces both more oxygen vacancies, and hence a higher charge-carrier density, and functional coatings with a higher degree of crystallization, and hence a higher mobility.
A large part of the present work deals with the optimization and characterization of ITO nanoparticle sols and functional coatings. In addition to the nanoparticles used, the dispersion process, the coating method and its respective parameters, and the post-treatment of the functional coatings, the sol composition was investigated for the first time in a detailed parameter study with regard to optimizing the infrared-optical and electrical properties of the applied functional coatings. In particular, the influence of the stabilizers and solvents used on the coating properties was characterized. This work shows that the exact composition of the nanoparticle sols plays a major role, and that the choice of solvent in the sol has a greater influence on the total emissivity and electrical sheet resistance of the applied coatings than the choice of stabilizer. However, it is also shown that no general statements can be made as to which stabilizer or which solvent in the nanoparticle sols leads to functional coatings with low total emissivities and sheet resistances. Instead, every single combination of stabilizer and solvent must be tested empirically, since each combination leads to functional coatings with different properties.
In addition, stable AZO nanoparticle sols were prepared for the first time in this work using several different recipes.
Besides the optimization and characterization of classical sol-gel and nanoparticle ITO and AZO sols and functional coatings, the infrared-optical properties of these coatings were also modelled in order to determine their optical constants and layer thicknesses. Commercially available sputtered ITO and AZO functional coatings were modelled as well. The reflectance spectra of these three coating types were modelled, on the one hand, exclusively with the Drude model using a self-written program in Sage and, on the other hand, with a more complex fit model built in the commercial software SCOUT from the extended Drude model, a Kim oscillator and the OJL model. This fit model also accounts for the influence of the glass substrates on the reflectance of the applied coatings and allows the optical constants and thicknesses of the coatings to be determined. Furthermore, an ellipsometer was installed as part of this work and suitable fit models were developed with which the ellipsometer measurements can be evaluated and the optical constants and layer thicknesses of the prepared coatings determined.
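The Drude part of such reflectance modelling is compact enough to sketch: the free-carrier dielectric function ε(ω) = ε∞ − ωp²/(ω² + iγω) gives the normal-incidence reflectance of a coating treated as optically thick. This is a simplification for illustration only (the substrate and film-thickness effects that the SCOUT fit model includes are neglected, and the parameter values in the example are illustrative, not fitted values from the thesis):

```python
import cmath

def drude_reflectance(omega, eps_inf, omega_p, gamma):
    # Drude dielectric function: eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w).
    eps = eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)
    n = cmath.sqrt(eps)                 # complex refractive index n + ik
    r = (n - 1) / (n + 1)               # normal-incidence Fresnel coefficient
    return abs(r)**2                    # reflectance of a thick layer
```

The expected qualitative behaviour of a TCO appears directly: well below the plasma frequency the layer is highly reflective (low emissivity), well above it the reflectance drops to that of an ordinary dielectric.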
Empathy, the act of sharing another person’s affective state, is a ubiquitous driver for helping others and feeling close to them. These experiences are integral parts of human behavior and society. The studies presented in this dissertation aimed to investigate the sustainability and stability of social closeness and prosocial decision-making driven by empathy and other social motives. In this vein, four studies were conducted in which behavioral and neural indicators of empathy sustainability were identified using model-based functional magnetic resonance imaging (fMRI).
Applying reinforcement learning, drift-diffusion modelling (DDM), and fMRI, the first two studies investigated the formation and sustainability of empathy-related social closeness (study 1) and examined how sustainably empathy led to prosocial behavior (study 2). Using DDM and fMRI, the last two studies investigated how empathy, combined on the one hand with reciprocity (the social norm to return a favor) and on the other hand with the motive of outcome maximization, altered the behavioral and neural social decision process.
The results showed that empathy-related social closeness and prosocial decision tendencies persisted even if empathy was rarely reinforced. The sustainability of these empathy effects was related to recalibration of the empathy-related social closeness learning signal (study 1) and the maintenance of a prosocial decision bias (study 2). The findings of study 3 showed that empathy boosted the processing of reciprocity-based social decisions, but not vice versa. Study 4 revealed that empathy-related decisions were modulated by the motive of outcome maximization, depending on individual differences in state empathy.
Together, the studies strongly support the concept of empathy as a sustainable driver of social closeness and prosocial behavior.
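Drift-diffusion modelling treats a binary decision as noisy evidence accumulation toward one of two bounds; the drift rate captures the decision bias and the crossing time the reaction time. A minimal single-trial simulation (hypothetical parameter values; the studies fitted full DDMs to choice and reaction-time distributions rather than simulating single trials) looks like this:

```python
import random

def ddm_trial(drift, threshold, dt=0.001, noise=1.0, rng=None):
    # Euler-Maruyama simulation of a drift-diffusion trial: evidence x
    # accumulates with the given drift plus Gaussian noise until it crosses
    # +threshold (e.g. "prosocial" choice) or -threshold ("selfish" choice).
    # Returns (choice, reaction_time).
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else -1), t
```

With a strongly positive drift, most simulated trials terminate at the upper bound, which is how a sustained prosocial decision bias manifests itself in this model class.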
Irrigated agriculture in the Khorezm region in the arid inner Aral Sea Basin faces enormous challenges due to a legacy of cotton monoculture and non-sustainable water use. Regional crop growth monitoring and yield estimation continuously gain in importance, especially with regard to climate change and food security issues. Remote sensing is the ideal tool for regional-scale analysis, especially in regions where ground-truth data collection is difficult and data availability is scarce. New satellite systems promise higher spatial and temporal resolutions. So-called light use efficiency (LUE) models are based on the fraction of photosynthetically active radiation absorbed by vegetation (FPAR), a biophysical parameter that can be derived from satellite measurements. The general objective of this thesis was to use satellite data, in conjunction with an adapted LUE model, for inferring the crop yield of cotton and rice at field (6.5 m) and regional (250 m) scale for multiple years (2003-2009), in order to assess crop yield variations in the study area. Intensive field measurements of FPAR were conducted in the Khorezm region during the growing season of 2009. RapidEye imagery was acquired approximately bi-weekly during this time. The normalized difference vegetation index (NDVI) was calculated for all images. Linear regression between image-based NDVI and field-based FPAR was conducted. The analyses resulted in high correlations, and the resulting regression equations were used to generate time series of FPAR at the RapidEye level. RapidEye-based FPAR was subsequently aggregated to the MODIS scale and used to validate the existing MODIS FPAR product. This step was carried out to evaluate the applicability of MODIS FPAR for regional vegetation monitoring. The validation revealed that the MODIS product generally overestimates RapidEye FPAR by about 6 to 15 %. Mixing of crop types was found to be a problem at the 1 km scale, but was less severe at the 250 m scale.
Consequently, high resolution FPAR was used to calibrate 8-day, 250 m MODIS NDVI data, this time by linear regression of RapidEye-based FPAR against MODIS-based NDVI. The established FPAR datasets, for both RapidEye and MODIS, were subsequently assimilated into a LUE model as the driving variable. This model operated at both satellite scales, and both scales required the estimation of further parameters such as the photosynthetically active radiation (PAR) and the actual light use efficiency (LUEact). The latter is influenced by crop stress factors like temperature or water stress, which were accounted for in the model. Water stress was especially important and was calculated via the ratio of the actual evapotranspiration (ETact) to the potential, crop-specific evapotranspiration (ETc). Results showed that water stress typically occurred between the beginning of May and mid-September for cotton and between the beginning of May and the end of July for rice. The mean water stress showed only minor differences between years, with exceptions in 2008 and 2009, when it was higher and lower, respectively. In 2008, this was likely caused by generally reduced water availability in the whole region. Model estimates were evaluated using field-based harvest information (RapidEye) and statistical information at district level (MODIS). The results showed that the model can estimate regional crop yield with acceptable accuracy at both the RapidEye and the MODIS scale. The RMSE at the RapidEye scale amounted to 29.1 % for cotton and 30.4 % for rice. At the MODIS scale, depending on the year and evaluated at Oblast level, the RMSE ranged from 10.5 % to 23.8 % for cotton and from -0.4 % to -19.4 % for rice. Altogether, the RapidEye-scale model slightly underestimated cotton (bias = 0.22) and rice yields (bias = 0.11).
The MODIS-scale model, on the other hand, also underestimated official rice yields (bias from 0.01 to 0.87) but overestimated official cotton yields (bias from -0.28 to -0.6). Evaluation at the MODIS scale revealed that predictions were very accurate for some districts but less so for others. The produced crop yield maps indicate that crop yield generally decreases with distance from the river, with the lowest yields in the southern districts, close to the desert. From a temporal point of view, some areas were characterized by low crop yields over the span of the seven years investigated. The study at hand showed that light use efficiency based modeling, driven by remote sensing data, is a viable way to predict regional crop yields. The accuracies achieved were good within the bounds of related research. From a methodological viewpoint, the work made several improvements to the LUE models reported in the literature, e.g. the calibration of FPAR for the study region using in situ measurements and high resolution RapidEye imagery, and the incorporation of crop-specific water stress into the calculation.
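At its core, a light use efficiency model accumulates daily productivity as PAR × FPAR × LUEact, with LUEact obtained by down-regulating a maximal efficiency with stress scalars such as the water stress ratio ETact/ETc. A minimal sketch with illustrative values (the thesis model adds crop-specific parameterizations and the conversion from accumulated productivity to harvestable yield):

```python
def lue_yield(par_series, fpar_series, lue_max, t_stress, w_stress):
    # Accumulate productivity over the season:
    # daily GPP = PAR * FPAR * LUE_act, where
    # LUE_act = LUE_max * temperature_scalar * water_stress_scalar,
    # and the water stress scalar is the ratio ET_act / ET_c (0..1).
    gpp = 0.0
    for par, fpar, ts, ws in zip(par_series, fpar_series, t_stress, w_stress):
        gpp += par * fpar * lue_max * ts * ws
    return gpp
```

Halving the water stress scalar on a given day halves that day's contribution, which is exactly how reduced water availability (as in 2008) propagates into lower modelled yields.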
Current changes in biodiversity result almost exclusively from human activities. This anthropogenic conversion of natural ecosystems during the last decades has led to the so-called ‘biodiversity crisis’, which comprises the loss of species as well as changes in the global distribution patterns of organisms. Species richness is unevenly distributed worldwide: altogether, 17 so-called ‘megadiverse’ nations cover less than 10% of the earth’s land surface but support nearly 70% of global species richness. Mexico, the study area of this thesis, is one of those countries. However, due to Mexico’s large extent and geographical complexity, it is impossible to conduct reliable and spatially explicit assessments of species distribution ranges based on collection data and field work alone. In the last two decades, species distribution models (SDMs) have been established as important tools for extrapolating such in situ observations. SDMs analyze empirical correlations between geo-referenced species occurrence data and environmental variables to obtain spatially explicit surfaces indicating the probability of species occurrence. Remote sensing can provide such variables, describing biophysical land surface characteristics at high effective spatial resolutions. Especially during the last three to five years, the number of studies using remote sensing data for modeling species distributions has therefore multiplied. Due to the novelty of this field of research, the published literature consists mostly of selective case studies, and a systematic framework for modeling species distributions by means of remote sensing is still missing.
This thesis takes up this research gap with specific studies addressing the combination of climate and remote sensing data in SDMs, the suitability of continuous remote sensing variables in comparison with categorical land cover classification data, the criteria for selecting appropriate remote sensing data depending on species characteristics, and the effects of inter-annual variability in remotely sensed time series on the performance of species distribution models. The corresponding novel analyses were conducted with the Maximum Entropy algorithm developed by Phillips et al. (2004). In this thesis, a more comprehensive set of remote sensing predictors than in the existing literature was utilized for species distribution modeling. The products were selected based on their ecological relevance for characterizing species distributions. Two 1 km Terra-MODIS 16-day composite standard land products, including the Enhanced Vegetation Index (EVI), reflectance data, and Land Surface Temperature (LST), were assembled into enhanced time series for the period 2001 to 2009. These high-dimensional time series were then transformed into 18 phenological and 35 statistical metrics selected on the basis of an extensive literature review. The spatial distributions of twelve tree species were modeled in a hierarchical framework integrating climate (WorldClim) and MODIS remote sensing data. The species are representative of the major Mexican forest types and cover a variety of ecological traits, such as range size and biotope specificity. Trees were selected because they have a high probability of detection in the field and because mapping vegetation has a long tradition in remote sensing. The results of this thesis showed that integrating remote sensing data into species distribution models has significant potential for improving both the spatial detail and the accuracy of the model predictions.
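An SDM correlates geo-referenced presence/absence records with environmental predictors and returns an occurrence probability surface. The thesis uses the Maximum Entropy algorithm; as a deliberately simple stand-in illustrating the same principle (this is logistic regression on toy data, not MaxEnt, and the predictor values are invented), one can fit:

```python
import math

def fit_sdm(X, y, lr=0.1, epochs=2000):
    # Logistic regression via stochastic gradient ascent on the
    # log-likelihood: presence/absence y regressed on environmental
    # predictors X. Returns [bias, w1, w2, ...].
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p                    # gradient of the log-likelihood
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, x):
    # Occurrence probability at a location with predictor values x.
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))
```

The fitted model produces the spatially explicit probability surface an SDM is after: evaluating `predict` over a predictor raster yields an occurrence map.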
Internet applications are becoming more and more flexible to support diverse user demands and network conditions. This is reflected by technical concepts that provide new adaptation mechanisms allowing fine-grained adjustment of the application quality and the corresponding bandwidth requirements. In the case of video streaming, the scalable video codec H.264/SVC allows the flexible adaptation of frame rate, video resolution, and image quality with respect to the available network resources. In order to guarantee a good user-perceived quality (Quality of Experience, QoE), it is necessary to adjust and optimize the video quality accurately. But applications are not the only part of the current Internet that has changed. On the network and transport layers, new technologies have evolved during the last years that provide a more flexible and efficient usage of data transport and network resources. One of the most promising technologies is Network Virtualization (NV), which is seen as an enabler to overcome the ossification of the Internet stack. It provides means to simultaneously operate multiple logical networks, which allow, for example, application-specific addressing, naming, and routing, or individual resource management. New transport mechanisms like multipath transmission on the network and transport layers aim at an efficient usage of the available transport resources. However, the simultaneous transmission of data via heterogeneous transport paths and communication technologies inevitably introduces packet reordering. Additional mechanisms and buffers are required to restore the correct packet order and thus to prevent a disturbance of the data transport. A proper buffer dimensioning, as well as the classification of the impact of varying path characteristics like bandwidth and delay, requires appropriate evaluation methods. Additionally, path selection mechanisms need real-time evaluation methods.
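The reordering problem described above can be sketched with a minimal sequence-number reorder buffer (hypothetical class and method names; real multipath transports such as MPTCP use considerably more elaborate buffering and signaling):

```python
class ReorderBuffer:
    """Restore in-order delivery for packets arriving over multiple paths.
    Out-of-order packets are held until the missing sequence numbers arrive."""

    def __init__(self):
        self.next_seq = 0   # next sequence number expected in order
        self.pending = {}   # seq -> payload, waiting for the gap to close

    def receive(self, seq, payload):
        """Accept one packet; return all payloads now deliverable in order."""
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

# Packets 0..3 arrive interleaved from two paths: 1 and 3 before 0 and 2.
buf = ReorderBuffer()
out = []
for seq, data in [(1, "b"), (3, "d"), (0, "a"), (2, "c")]:
    out += buf.receive(seq, data)
# out == ["a", "b", "c", "d"]
```

The maximum size `pending` reaches during a run is exactly the buffer occupancy that the dimensioning methods mentioned above must bound, and it grows with the delay difference between the paths.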
A better application-network interaction and the corresponding exchange of information enable an efficient adaptation of the application to the network conditions and vice versa. This PhD thesis analyzes a video streaming architecture utilizing multipath transmission and scalable video coding, and develops the following optimization possibilities and results: analysis and dimensioning methods for multipath transmission; quantification of the adaptation possibilities to the current network conditions with respect to the QoE of H.264/SVC; and evaluation and optimization of a future video streaming architecture that allows a better interaction of application and network.
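The adaptation of H.264/SVC quality to the network conditions can be illustrated by a greedy layer selection over a cumulative layer table (the layer names and bitrates below are invented for illustration, not taken from the thesis):

```python
# Hypothetical cumulative (layer, bitrate) table for an H.264/SVC stream:
# each entry adds one enhancement layer on top of all previous ones.
LAYERS = [
    ("base (QCIF, 15 fps)",      250),   # kbps, cumulative
    ("+ spatial (CIF)",          600),
    ("+ temporal (30 fps)",      900),
    ("+ quality (higher SNR)",  1400),
]

def select_layers(available_kbps):
    """Pick the highest cumulative layer whose rate fits the available bandwidth."""
    chosen = None
    for name, rate in LAYERS:
        if rate <= available_kbps:
            chosen = name
        else:
            break
    return chosen

print(select_layers(1000))  # -> "+ temporal (30 fps)"
```

A QoE-aware controller would replace the simple rate ordering with per-layer utility values, since dropping the frame rate and dropping the resolution degrade the perceived quality differently.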
Apoptosis of liver cells depends on external signals, such as components of the extracellular matrix and cell-cell contacts, which are processed by a large number and variety of nodes. Several of these nodes were investigated in this work with respect to their system-level effects. Despite diverse external influences and natural selection, the system is optimized to adopt a small number of distinct, clearly distinguishable system states. The diverse influences and crosstalk mechanisms serve to optimize these existing system states. The model presented in this work exhibits two apoptotic and two non-apoptotic stable system states, whereby the degree of activation of a node can vary strongly up to the moment at which the overall system state itself changes (Philippi et al., BMC Systems Biology, 2009) [1]. Although this model is a simplification of the entire cellular network and its various states, it operates independently of detailed kinetic data and parameters of the individual nodes. Nevertheless, the model reproduces apoptosis following stimulation with FasL with good qualitative agreement. Furthermore, the model covers crosstalk via the collagen-integrin signaling pathway and also accounts for the effects of the genetic deletion of Bid as well as the consequences of a viral infection. In a second part, further applications are presented: hormonal signals in plants, virus infections, and intracellular communication are modeled semi-quantitatively. Here, too, the models showed good agreement with the experimental data.
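The idea of a signaling network settling into a small number of discrete stable states can be illustrated with a toy Boolean model (a two-node mutual-inhibition switch, a common motif in apoptosis decision networks; this is an illustrative example, not the thesis model):

```python
from itertools import product

def update(state):
    """Synchronous Boolean update: each node inhibits the other."""
    apoptosis, survival = state
    return (not survival, not apoptosis)

# A stable system state is a state mapped onto itself by the update rules.
steady = [s for s in product([False, True], repeat=2) if update(s) == s]
# steady == [(False, True), (True, False)]  -> "survive" and "die" attractors
```

Enumerating fixed points this way scales only to small networks, but it captures why such logical models can identify distinct, clearly separated system states without any kinetic parameters.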