We present a theoretical study of exciton–exciton annihilation (EEA) in a molecular dimer. The process is monitored using fifth-order coherent two-dimensional (2D) spectroscopy, as recently proposed by Dostál et al. [Nat. Commun. 9, 2466 (2018)]. Using an electronic three-level system for each monomer, we analyze the different pathways that contribute to the 2D spectrum. The spectrum is determined by two intertwined relaxation processes, namely EEA and the direct relaxation of higher-lying excited states. It is shown that the change of the spectrum as a function of pulse delay can be linked directly to the presence of the EEA process.
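The pathway analysis can be made concrete with a minimal Frenkel-exciton model (the notation below is illustrative, not necessarily that of the original paper): each monomer $n = 1, 2$ carries a ground state $|g_n\rangle$, a singly excited state $|e_n\rangle$ and a higher-lying state $|f_n\rangle$, and the singly excited states are coupled by a resonant interaction $J$:

```latex
H = \sum_{n=1}^{2} \Big( \varepsilon_e \, |e_n\rangle\langle e_n|
                       + \varepsilon_f \, |f_n\rangle\langle f_n| \Big)
  + J \Big( |e_1 g_2\rangle\langle g_1 e_2| + |g_1 e_2\rangle\langle e_1 g_2| \Big)
```

In such a model, EEA corresponds to the two-step process $|e_1 e_2\rangle \to |f_1 g_2\rangle$ followed by internal relaxation $|f_1\rangle \to |e_1\rangle$, which competes with the direct relaxation of the higher-lying states; the pulse-delay dependence of the fifth-order signal separates these pathways.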
Cataglyphis ants are famous for their navigational abilities. They live in hostile habitats where they forage as solitary scavengers, covering distances of more than a hundred thousand times their body length. To return to their nest with a prey item – mainly other dead insects that did not survive the heat – Cataglyphis ants constantly keep track of the directions and distances travelled. This navigational strategy is called path integration, and it enables an ant to return to the nest in a straight line using its home vector. Cataglyphis ants mainly rely on celestial compass cues, such as the position of the sun or the UV polarization pattern, to determine directions, and they use an idiothetic step counter and optic flow to measure distances. In addition, they acquire information about visual, olfactory and tactile landmarks, and about the wind direction, to increase their chances of returning to the nest safe and sound. Cataglyphis' navigational performance becomes even more impressive if one considers their lifestyle. For most of their lives, the ants stay underground and perform tasks within the colony. When they start their foraging careers outside the nest, they have to calibrate their compass systems and acquire all information necessary for navigation during subsequent foraging. This navigational toolkit is not instantaneously available but has to be filled with experience. For that reason, Cataglyphis ants perform a striking behavior for up to three days before actually foraging. These so-called learning walks are crucial for their later success as foragers. In the present thesis, both the ontogeny and the fine structure of learning walks have been investigated. Here I show with displacement experiments that Cataglyphis ants need enough space and enough time to perform learning walks. Spatially restricted novices, i.e. naïve ants, could not find their way back to the nest when tested as foragers later on.
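The path-integration scheme described above can be sketched in a few lines (a toy model with hypothetical function names, not software from the thesis): displacement vectors derived from a compass heading and an odometric distance are accumulated, and the negated sum is the home vector.

```python
import math

def home_vector(steps):
    """Accumulate a path-integration home vector.

    steps: iterable of (heading_deg, distance) pairs, with the heading
    measured clockwise from north (as a celestial compass would supply).
    Returns the (heading_deg, distance) the ant must walk to reach
    the nest in a straight line.
    """
    x = y = 0.0  # east and north components of the outbound path
    for heading_deg, dist in steps:
        x += dist * math.sin(math.radians(heading_deg))
        y += dist * math.cos(math.radians(heading_deg))
    home_dist = math.hypot(x, y)
    # The home vector is the negative of the accumulated displacement.
    home_heading = math.degrees(math.atan2(-x, -y)) % 360.0
    return home_heading, home_dist
```

For example, an ant that walks 3 units east (heading 90°) and then 4 units north (heading 0°) obtains a home vector of length 5 pointing south-west.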
Furthermore, ants have to perform several learning walks over one to three days to gain the landmark information needed for successful homing as foragers. An increasing number of feeder visits also increases the importance of landmark information, whereas in the beginning ants rely fully on their path-integration vector. Learning walks are well structured. High-speed video analysis revealed that Cataglyphis ants include species-specific rotational elements in their learning walks. Greek Cataglyphis ants (C. noda and C. aenescens) inhabiting a cluttered pine forest perform voltes (small walked circles) and pirouettes (tight turns about the body axis with frequent stopping phases). During the longest stopping phases, the ants gaze back to their nest entrance. Tunisian Cataglyphis fortis ants inhabiting featureless saltpans perform only voltes, without directed gazes. The function of the voltes has not yet been revealed. In contrast, the fine structure of pirouettes suggests that the ants take snapshots of the panorama towards their homing direction to memorize the nest's surroundings. The most likely hypothesis was that Cataglyphis ants align these gaze directions using their path integrator, which receives directional input from celestial cues during foraging. To test this hypothesis, a manipulation experiment was performed that changed the celestial cues above the nest entrance (no sun, no natural polarization pattern, no UV light). The accurately directed gazes to the nest entrance offer an easily quantifiable readout, suitable for asking the ants where they expect their nest entrance to be. Unexpectedly, all novices performing learning walks under artificial sky conditions looked back to the nest entrance. This was especially surprising because neuronal changes in the mushroom bodies and the central complex, which receive visual input, could only be induced under the natural sky when comparing test animals with interior workers.
The behavioral findings indicated that Cataglyphis ants use another directional reference system to align their gaze directions during the longest stopping phases of learning-walk pirouettes. One possibility was the earth's magnetic field. Indeed, merely disarraying the geomagnetic field at the nest entrance with an electromagnetic flat coil indicated that the ants use magnetic information to align their looks back to the nest entrance. To investigate this finding further, ants were confronted with a controlled magnetic field generated by a Helmholtz coil. Eliminating the horizontal field component led to undirected gaze directions, just as the disarrayed field did. Rotating the magnetic field by 90°, 180° or -90° shifted the ants' gaze directions in a predictable manner. Therefore, the earth's magnetic field is a necessary and sufficient reference system for aligning nest-centered gazes during learning-walk pirouettes. Whether it is additionally used for other navigational purposes, e.g. for calibrating the solar ephemeris, remains to be tested. Perhaps the voltes performed by all Cataglyphis species investigated so far can help to answer this question.
Energy efficiency of computing systems has become an increasingly important issue over the last decades. In 2015, data centers were responsible for 2% of the world's greenhouse gas emissions, which is roughly the same as the amount produced by air travel.
In addition to these environmental concerns, power consumption of servers in data centers results in significant operating costs, which increase by at least 10% each year.
To address this challenge, the U.S. EPA and other government agencies are considering the use of novel measurement methods in order to label the energy efficiency of servers.
The energy efficiency and power consumption of a server are subject to a great number of factors, including, but not limited to, hardware, software stack, workload, and load level.
This huge number of influencing factors makes measuring and rating of energy efficiency challenging. It also makes it difficult to find an energy-efficient server for a specific use-case. Among others, server provisioners, operators, and regulators would profit from information on the servers in question and on the factors that affect those servers' power consumption and efficiency. However, we see a lack of measurement methods and metrics for energy efficiency of the systems under consideration.
Even assuming that a measurement methodology existed, making decisions based on its results would be challenging. Power prediction methods that make use of these results would aid in decision making. They would enable potential server customers to make better purchasing decisions and help operators predict the effects of potential reconfigurations.
Existing energy efficiency benchmarks cannot fully address these challenges, as they only measure single applications at limited sets of load levels. In addition, existing efficiency metrics are not helpful in this context, as they are usually a variation of the simple performance per power ratio, which is only applicable to single workloads at a single load level. Existing data center efficiency metrics, on the other hand, express the efficiency of the data center space and power infrastructure, not focusing on the efficiency of the servers themselves. Power prediction methods for not-yet-available systems that could make use of the results provided by a comprehensive power rating methodology are also lacking. Existing power prediction models for hardware designers have a very fine level of granularity and detail that would not be useful for data center operators.
This thesis presents a measurement and rating methodology for energy efficiency of servers and an energy efficiency metric to be applied to the results of this methodology. We also design workloads, load intensity and distribution models, and mechanisms that can be used for energy efficiency testing. Based on this, we present power prediction mechanisms and models that utilize our measurement methodology and its results for power prediction.
Specifically, the six major contributions of this thesis are:
We present a measurement methodology and metrics for energy efficiency rating of servers that use multiple, specifically chosen workloads at different load levels for a full system characterization.
We evaluate the methodology and metric with regard to their reproducibility, fairness, and relevance. We investigate the power and performance variations of test results and show fairness of the metric through a mathematical proof and a correlation analysis on a set of 385 servers. We evaluate the metric's relevance by showing the relationships that can be established between metric results and third-party applications.
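The flavor of such a multi-load-level efficiency metric can be illustrated as follows (a sketch only; the thesis metric's exact aggregation is not reproduced here): per-load-level performance-per-watt ratios are combined with a geometric mean so that no single load level dominates the score.

```python
import math

def efficiency_score(measurements):
    """Aggregate per-load-level efficiency into a single score.

    measurements: list of (performance, power_watts) tuples, one per
    load level. Each level contributes its performance-per-watt ratio;
    the geometric mean keeps one extreme level from dominating.
    """
    ratios = [perf / watts for perf, watts in measurements]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))
```

For two load levels that both achieve 2 operations per watt, the score is 2; for ratios of 2 and 4 it is the geometric mean, about 2.83.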
We create models and extraction mechanisms for load profiles that vary over time, as well as load distribution mechanisms and policies. The models are designed to be used to define arbitrary dynamic load intensity profiles that can be leveraged for benchmarking purposes. The load distribution mechanisms place workloads on computing resources in a hierarchical manner.
Our load intensity models can be extracted in less than 0.2 seconds and our resulting models feature a median modeling error of 12.7% on average. In addition, our new load distribution strategy can save up to 10.7% of power consumption on a single server node.
We introduce an approach to create small-scale workloads that emulate the power consumption-relevant behavior of large-scale workloads by approximating their CPU performance counter profile, and we introduce TeaStore, a distributed, micro-service-based reference application. TeaStore can be used to evaluate power and performance model accuracy, elasticity of cloud auto-scalers, and the effectiveness of power saving mechanisms for distributed systems.
We show that we are capable of emulating the power consumption behavior of realistic workloads with a mean deviation of less than 10% and down to 0.2 watts (1%). We demonstrate the use of TeaStore in the context of performance model extraction and cloud auto-scaling, also showing that it can generate workloads with different effects on the power consumption of the system under consideration.
We present a method for automated selection of interpolation strategies for performance and power characterization. We also introduce a configuration approach for polynomial interpolation functions of varying degrees that improves prediction accuracy for system power consumption for a given system utilization.
We show that, in comparison to regression, our automated interpolation method selection and configuration approach improves modeling accuracy by 43.6% if additional reference data is available and by 31.4% if it is not.
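A minimal sketch of the idea behind automated interpolation-strategy selection (illustrative, not the thesis implementation): candidate polynomial degrees are scored by leave-one-out error on the measured (utilization, power) calibration points, and the best-scoring degree is used to predict power at unseen utilization levels.

```python
import numpy as np

def select_power_model(util, power, degrees=(1, 2, 3)):
    """Pick the polynomial degree with the lowest leave-one-out error.

    util, power: measured utilization (0..1) and power (W) samples.
    Returns (best_degree, coeffs) for predicting power from utilization.
    """
    util, power = np.asarray(util, float), np.asarray(power, float)
    best = None
    for deg in degrees:
        if deg >= len(util):          # not enough points to fit this degree
            continue
        err = 0.0
        for i in range(len(util)):
            mask = np.arange(len(util)) != i          # hold one point out
            coeffs = np.polyfit(util[mask], power[mask], deg)
            err += (np.polyval(coeffs, util[i]) - power[i]) ** 2
        if best is None or err < best[0]:
            best = (err, deg)
    deg = best[1]
    # Refit the winning degree on all points for the final model.
    return deg, np.polyfit(util, power, deg)
```

On calibration data following, say, a quadratic power curve, the selection favors a low-degree polynomial that reproduces intermediate load levels accurately.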
We present an approach for explicit modeling of the impact a virtualized environment has on power consumption and a method to predict the power consumption of a software application. Both methods use results produced by our measurement methodology to predict the respective power consumption for servers that are otherwise not available to the person making the prediction.
Our methods are able to predict power consumption reliably for multiple hypervisor configurations and for the target application workloads. Application workload power prediction features a mean absolute percentage error of 9.5%.
Finally, we propose an end-to-end modeling approach for predicting the power consumption of component placements at run-time. The model can also be used to predict the power consumption at load levels that have not yet been observed on the running system.
We show that we can predict the power consumption of two different distributed web applications with a mean absolute percentage error of 2.2%. In addition, we can predict the power consumption of a system at a previously unobserved load level and component distribution with an error of 1.2%.
The contributions of this thesis already show a significant impact in science and industry. The presented efficiency rating methodology, including its metric, has been adopted by the U.S. EPA in the latest version of the ENERGY STAR Computer Server program. It is also being considered by additional regulatory agencies, including the EU Commission and the China National Institute of Standardization. In addition, the methodology's implementation and the underlying methodology itself have already found use in several research publications.
Regarding future work, we see a need for new workloads targeting specialized server hardware. At the moment, we are witnessing a shift in execution hardware to specialized machine learning chips, general purpose GPU computing, FPGAs being embedded into compute servers, etc. To ensure that our measurement methodology remains relevant, workloads covering these areas are required. Similarly, power prediction models must be extended to cover these new scenarios.
The attitude and orbit control system of pico- and nano-satellites is to date one of the bottlenecks for future scientific and commercial applications. A performance increase, while staying within the satellites' restrictions, will enable new space missions, especially for the smallest of the CubeSat classes. This work addresses methods to measure and improve a satellite's attitude pointing and orbit control performance based on advanced sensor data analysis and optimized on-board software concepts. These methods are applied to satellites in orbit and to future CubeSat missions to demonstrate their validity. An in-orbit calibration procedure for a typical CubeSat attitude sensor suite is developed and applied to the UWE-3 satellite in space. Subsequently, a method to estimate the attitude determination accuracy without the help of an external reference sensor is developed. Using this method, it is shown that the UWE-3 satellite achieves an in-orbit attitude determination accuracy of about 2°.
An advanced data analysis of the attitude motion of a miniature satellite is used to estimate the main attitude disturbance torque in orbit. It is shown that the magnetic disturbance is by far the most significant contribution for miniature satellites, and a method to estimate the residual magnetic dipole moment of a satellite is developed. Its application to three CubeSats currently in orbit reveals that magnetic disturbances are a common issue for this class of satellites. The dipole moments measured are between 23.1 mAm² and 137.2 mAm². In order to autonomously estimate and counteract this disturbance in future missions, an on-board magnetic dipole estimation algorithm is developed.
The autonomous neutralization of such disturbance torques, together with the simplification of attitude control for the satellite operator, is the focus of a novel on-board attitude control software architecture. It incorporates the disturbance torques acting on the satellite and automatically optimizes the control output. Its application is demonstrated in space on board the UWE-3 satellite through various attitude control experiments, whose results are presented here.
The integration of a miniaturized electric propulsion system will enable CubeSats to perform orbit control and thus open up new application scenarios. The in-orbit characterization, however, poses the problem of precisely measuring very low thrust levels on the order of µN. A method to measure this thrust based on the attitude dynamics of the satellite is developed and evaluated in simulation. It is shown that the demonstrator mission UWE-4 will be able to measure these thrust levels with a high accuracy of 1% for thrust levels above 1 µN.
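The principle of inferring thrust from attitude dynamics can be illustrated with a toy calculation (all names and numbers here are hypothetical, not the thesis's actual estimator): a thruster whose line of action misses the center of mass by a known lever arm produces a torque, and the angular acceleration it induces reveals the force.

```python
def thrust_from_spin_up(inertia, lever_arm, omega_start, omega_end, dt):
    """Estimate thrust from the angular speed change it induces.

    A thruster whose line of action misses the center of mass by
    `lever_arm` [m] applies the torque tau = F * lever_arm. For a rigid
    body rotating about a principal axis with moment of inertia
    `inertia` [kg m^2], inverting tau = inertia * alpha gives
    F = inertia * (d omega / dt) / lever_arm.
    """
    alpha = (omega_end - omega_start) / dt   # mean angular acceleration [rad/s^2]
    return inertia * alpha / lever_arm       # thrust [N]
```

With a moment of inertia of 2e-3 kg m² (roughly a 1U CubeSat), a 5 cm lever arm and a spin-up of 1e-3 rad/s over 100 s, this yields 0.4 µN, i.e. exactly the regime quoted above.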
The orbit control capabilities of UWE-4 using its electric propulsion system are evaluated, and a hybrid attitude control system making use of the satellite's magnetorquers and the electric propulsion system is developed. It is based on the flexible attitude control architecture mentioned above, and thrust vector pointing accuracies of better than 2° can be achieved. This results in a thrust delivery of more than 99% of the desired acceleration in the target direction.
The aim of this thesis was the application of the functional prepolymer NCO-sP(EO-stat-PO) for the development of new biomaterials. First, the influence of the star-shaped polymers on the mechanical properties of biocements and bone adhesives was investigated. Three-armed star-shaped macromers were used as an additive for a mineral bone cement, and the influence on the mechanical properties was studied. Additionally, a previously developed bone adhesive was examined regarding cytocompatibility. The second topic was the examination of novel functionalization steps performed on the surface of electrospun fibers modified with NCO-sP(EO-stat-PO). This established method of functionalizing electrospun meshes was advanced with regard to modification with proteins, which was then demonstrated in a biological application. Two different kinds of antibodies were immobilized on the fiber surface in a consecutive manner, and the influence of these proteins on cell behavior was investigated. The final topic involved the quantification of surface-bound peptide sequences. By functionalizing the peptides with the UV-reactive molecule 2-mercaptopyridine, it was possible to quantify this compound via UV measurements after cleavage of the disulfide bridges and thus indirectly draw conclusions about the number of immobilized peptides.
In the field of mineral biocements and bone adhesives, NCO-sP(EO-stat-PO) was able to influence the setting behavior and mechanical performance of mineral bone cements based on calcium phosphate chemistry. The addition of NCO-sP(EO-stat-PO) resulted in a pseudo-ductile fracture behavior due to the formation of a hydrogel network in the cement, which was then mineralized by nanosized hydroxyapatite crystals following cement setting. In the same way, a commercially available aluminum silicate cement from civil engineering could be modified.
In addition, it could be shown that the use of NCO-sP(EO-stat-PO) is beneficial for adjusting specific material properties of bone adhesives. Here, the crosslinking behavior of the prepolymer in an aqueous medium was exploited to form an interpenetrating network (IPN) together with a photochemically curing poly(ethylene glycol) dimethacrylate (PEGDMA) matrix. This could be used for the development of a bone adhesive with an improved adhesion to bone in a wet environment. The developed bone adhesive was further investigated in terms of possible influences of the initiator systems. In addition, the material system was tested for cytocompatibility by using different cell lines.
Moreover, the preparation of electrospun fiber meshes via solution electrospinning, with poly(lactide-co-glycolide) (PLGA) as a backbone polymer and NCO-sP(EO-stat-PO) as a functional additive, is an established method for producing meshes that serve as a replacement for the native extracellular matrix (ECM). In general, these fibers have diameters in the nanometer range, are protein and cell repellent due to the hydrophilic properties of the prepolymer, and allow specific biofunctionalization by immobilization of peptide sequences. Here, the isocyanate groups present on the fiber surface after electrospinning were used to carry out various functionalization steps while retaining the protein and cell repellency. The modification of the electrospun fibers involved the immobilization of analogs or antagonists of tumor necrosis factor (TNF) and their indirect detection by interaction with a light-producing enzyme. A multimodal modification of the fiber surface with RGD, to mediate cell adhesion, and with two different antibodies could thus be achieved. After culturing the cell line HT1080, the pro- or anti-inflammatory response of the cells could be detected by IL-8-specific ELISA measurements.
Furthermore, the quantification of molecules on the surface of electrospun fibers was investigated, including whether detection by means of super-resolution microscopy would be possible. To this end, experiments were performed with short amino acid sequences such as RGD for quantification by fluorescence microscopy. Based on earlier results, in which a UV-spectrometrically active molecule was used to quantify RGD, it was shown that short peptides can also be quantified at small scale on flat functional substrates (2D), such as NCO-sP(EO-stat-PO) hydrogel coatings, and on modified electrospun fibers produced from PLGA and NCO-sP(EO-stat-PO) (3D). In addition, a collagen sequence was used to prove that a successful quantification can be carried out for longer peptide chains as well.
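The indirect UV quantification rests on the Beer-Lambert law: the reporter released by disulfide cleavage absorbs with a known extinction coefficient, so the measured absorbance can be converted into a surface density of immobilized peptide. A minimal sketch (all numbers hypothetical, assuming one released reporter per immobilized peptide):

```python
def surface_density_nmol_per_cm2(absorbance, epsilon_l_per_mol_cm, path_cm,
                                 volume_l, area_cm2):
    """Infer surface-bound peptide density from the UV absorbance of the
    cleaved reporter, via Beer-Lambert: A = epsilon * c * l.

    Assumes one reporter molecule is released per immobilized peptide.
    """
    conc_mol_per_l = absorbance / (epsilon_l_per_mol_cm * path_cm)
    moles = conc_mol_per_l * volume_l        # total reporter in the cuvette
    return moles / area_cm2 * 1e9            # mol -> nmol per cm^2 of substrate
```

For instance, an absorbance of 0.5 with a (hypothetical) extinction coefficient of 8000 L mol⁻¹ cm⁻¹, a 1 cm path, 1 mL of solution and 1 cm² of substrate corresponds to 62.5 nmol/cm².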
These studies have revealed that NCO-sP(EO-stat-PO) can serve as a functional additive for many applications and should be considered for further studies on the development of novel biomaterials. The rapid crosslinking reaction, the resulting hydrogel formation and the biocompatibility are to be mentioned as positive properties, which makes the prepolymer interesting for future applications.
The present thesis describes the development of a strategy to create discrete, finite-sized supramolecular stacks of merocyanine dyes. Bichromophoric stacks of two identical or different chromophores could be realized by folding of bis(merocyanine) dyes, and their optical properties were discussed in terms of exciton theory. Quantum chemical calculations revealed strong exciton coupling between the chromophores within the homo- and hetero-π-stacks. The increase of the J-band of the hetero-dimers with increasing energy difference between the excited states of the chromophores could be attributed not only to the different magnitudes of the chromophores' transition dipole moments but also to the increased localization of the excitation in the respective exciton state. Furthermore, careful selection of the length of the spacer unit that defines the interplanar distance between the tethered chromophores directed the self-assembly of the respective bis(merocyanines) into dimers, trimers and tetramers comprising large, structurally precise π-stacks of four, six or eight merocyanine chromophores. It could be demonstrated that the structure of such large supramolecular architectures can be adequately elucidated by commonly accessible analysis tools, in particular NMR techniques in combination with UV/vis measurements and mass spectrometry. Supported by TDDFT calculations, the absorption spectra of the investigated aggregates could be explained, and a relationship between the absorption properties and the number of stacked chromophores could be established based on exciton theory.
Mechanistic Insights into the Inhibition of Cathepsin B and Rhodesain with Low-Molecular Inhibitors
(2019)
Cysteine proteases play a crucial role in medicinal chemistry in various fields, ranging from common ailments such as cancer and hepatitis to neglected tropical diseases such as African sleeping sickness (Human African Trypanosomiasis). Detailed knowledge about the catalytic function of these systems is highly desirable for drug research in the respective areas. In this work, the inhibition mechanisms of the two cysteine proteases cathepsin B and rhodesain, each with one class of low-molecular-weight inhibitors, were investigated in detail using computational methods. In order to adequately describe macromolecular systems, molecular mechanics (MM) and quantum mechanics (QM) based methods, as well as hybrid (QM/MM) methods combining the two approaches, were applied.
For cathepsin B, carbamate-based molecules were investigated as potential inhibitors of the cysteine protease. The results indicate that water-bridged proton-transfer reactions play a crucial role in the inhibition. The energetically most favoured pathway (according to the calculations) includes an elimination reaction following an E1cB mechanism with subsequent carbamylation of the active-site cysteine.
Nitroalkene derivatives were investigated as inhibitors of rhodesain. The investigation of structurally similar inhibitors showed that even small steric differences can crucially influence the inhibition potential of the compounds. Furthermore, the impact of fluorination of the nitroalkene inhibitors on the inhibition mechanism was investigated. According to experimental data measured by the group of Professor Schirmeister in Mainz, fluorinated nitroalkenes show – in contrast to the unfluorinated compounds – a time-dependent inhibition efficiency. The calculations indicate that the fluorination affects the non-covalent interactions of the inhibitors with the enzymatic environment, which results in a different inhibition behaviour.
The measurement of the mass of the $W$ boson is currently one of the most promising precision analyses of the Standard Model and could ultimately reveal a hint of new physics.
The mass of the $W$ boson is determined by comparing the $W$ boson, which cannot be reconstructed directly, to the $Z$ boson, for which the full decay signature is available. With the help of Monte Carlo simulations, one can extrapolate from the $Z$ boson to the $W$ boson.
Technically speaking, the measurement of the $W$ boson mass is performed by comparing data taken by the ATLAS experiment to a set of calibrated Monte Carlo simulations, which reflect different mass hypotheses.
A dedicated calibration of the reconstructed objects in the simulations is crucial for a high precision of the measured value.
The comparison of simulated $Z$ boson events to reconstructed $Z$ boson candidates in data allows the derivation of event weights and scale factors for the calibration.
This thesis presents a new approach to reweighting the hadronic recoil in the simulations. The focus of the calibration is on the average hadronic activity, visible in the mean of the scalar sum of the hadronic recoil, $\Sigma E_T$, as a function of pileup. In contrast to the standard method, which reweights the scalar sum directly, the dependence on the transverse boson momentum is less strongly affected here.
The $\Sigma E_T$ distribution is modeled first by means of its pileup dependence. Then, the remaining differences in the resolution of the vector sum of the hadronic recoil are scaled. This is done separately for the parallel and the perpendicular component of the hadronic recoil with respect to the reconstructed boson.
This calibration was developed for the dataset taken by the ATLAS experiment at a center-of-mass energy of $8\,\textrm{TeV}$ in 2012. In addition, the same reweighting procedure is applied to the recent dataset with a low pileup contribution, the \textit{lowMu} runs at $5\,\textrm{TeV}$ and at $13\,\textrm{TeV}$, taken by ATLAS in November 2017. The dedicated aspects of the reweighting procedure are presented in this thesis. It is shown that this reweighting approach effectively improves the agreement between data and simulation for all datasets.
The uncertainties of this reweighting approach, as well as the statistical errors, are evaluated for a $W$ mass measurement by a template fit to pseudodata for the \textit{lowMu} dataset. A first estimate of these uncertainties is given here. For the pfoEM algorithm, a statistical uncertainty of $17\,\text{MeV}$ for the $5\,\textrm{TeV}$ dataset and of $18\,\text{MeV}$ for the $13\,\textrm{TeV}$ dataset is found for the $W \rightarrow \mu \nu$ analysis. The systematic uncertainty introduced by the resolution scaling has the largest effect; a value of $15\,\text{MeV}$ is estimated for the $13\,\textrm{TeV}$ dataset in the muon channel.
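The template-fit principle underlying such a measurement can be sketched in toy form (this is not ATLAS code; the binning and numbers are hypothetical): histograms simulated under different mass hypotheses are compared to the (pseudo)data histogram, and the hypothesis with the smallest chi-square wins.

```python
def chi2(data, template):
    """Pearson chi-square between a data histogram and a template,
    summed over bins with non-empty template content."""
    return sum((d - t) ** 2 / t for d, t in zip(data, template) if t > 0)

def best_mass(data, templates):
    """Return the mass hypothesis whose template best fits the data.

    templates: dict mapping a mass hypothesis to its expected histogram,
    e.g. of a $W$-sensitive observable such as the transverse mass.
    """
    return min(templates, key=lambda m: chi2(data, templates[m]))
```

In practice one interpolates the chi-square curve around its minimum to extract the fitted mass and its statistical uncertainty; this sketch only shows the discrete comparison step.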
The present dissertation investigates the management of RFID implementations in retail trade. Our work contributes by investigating important aspects that have so far received little attention in the scientific literature. We perform three studies on three important aspects of managing RFID implementations. In our first study, we evaluate customer acceptance of pervasive retail systems using privacy calculus theory. The results reveal the most important aspects a retailer has to consider when implementing pervasive retail systems. In our second study, we analyze RFID-enabled robotic inventory taking with the help of a simulation model. The results show that retailers should implement robotic inventory taking if the accuracy rates of the robots are as high as the robots' manufacturers claim. In our third and last study, we evaluate the potential of RFID data for supporting managerial decision making. We propose three novel methods to extract useful information from RFID data and propose a generic information extraction process. Our work is geared towards practitioners who want to improve their RFID-enabled processes and towards scientists conducting RFID-based research.
Sensitivity and selectivity remain the central technical requirements for analytical devices, detectors and sensors. Especially in the gas phase, concentrations of threat substances can be very low (e.g. explosives) or have severe effects on health even at low concentrations (e.g. benzene), while ambient air contains many potential interferents. Preconcentration, i.e. active or passive sampling of air by an adsorbent followed by thermal desorption, releases these substances in a smaller volume, effectively increasing their concentration.
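The resulting gain can be captured in one line (an idealized sketch; real enrichment also depends on breakthrough behavior and desorption efficiency, which are folded into a single recovery factor here):

```python
def enrichment_factor(sampled_volume_l, desorb_volume_l, recovery=1.0):
    """Ideal preconcentration factor.

    The analyte collected from the sampled air volume is released into
    a much smaller desorption volume; incomplete recovery (0..1) scales
    the factor down accordingly.
    """
    return recovery * sampled_volume_l / desorb_volume_l
```

For example, sampling 20 L of air and desorbing into 10 mL gives a factor of 2000 at full recovery, i.e. a 1 ppb analyte appears at an effective 2 ppm.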
Traditionally, a wide range of adsorbents, such as activated carbons or porous polymers, is used for preconcentration. However, many adsorbents show chemical reactions due to active surfaces, serious water retention, or high background emission due to thermal instability. Metal-organic frameworks (MOFs) are a hybrid substance class composed of inorganic and organic building blocks, a special case of coordination polymers containing pores. They can be tailored for specific applications such as gas storage, separation, catalysis, sensing or drug delivery.
This thesis focuses on investigating MOFs for use in thermal preconcentration for airborne detection systems. A pre-screening method for MOF-adsorbate interactions, namely inverse gas chromatography (iGC), was developed and applied. Using this pulse chromatographic method, the interaction of MOFs with molecules from the class of explosives and with volatile organic compounds was studied at different temperatures and compared to thermal desorption results.
In the first part, it is shown that the archetypal MOFs HKUST-1, MIL-53 and Fe-BTC outperformed the state-of-the-art polymeric adsorbent Tenax® TA in nitromethane preconcentration for a 1000 (later 1) ppm nitromethane source. For HKUST-1, an enrichment factor of more than 2000 per g of adsorbent was achieved, about 100 times higher than for Tenax. Thereby, a nitromethane concentration of 1 ppb could be increased to 2 ppm. The high enrichment is attributed to the specific interaction of the nitro group, as determined by iGC through comparison of nitromethane's free enthalpy of adsorption with that of the corresponding saturated alkane. In addition, HKUST-1 shows a similar mode of sorption (enthalpy-entropy compensation) for nitroalkanes and saturated alkanes.
In the second part, benzene at a concentration of 1 ppm was enriched with a similar setup, using 2nd-generation MOFs, primarily UiO-66 and UiO-67, under dry and humid (50% rH) conditions with constant sampling times. No MOF in the study surpassed the polymeric Tenax in benzene preconcentration. This is most likely due to the short sampling times – while Tenax may be highly saturated after 600 s, the MOFs are not. For regular UiO-66, four differently synthesized samples showed strongly varying behavior for dry and humid enrichment, which cannot be completely explained. iGC investigations with regular alkanes and BTEX compounds revealed that confinement factors and dispersive surface energy differed for all UiO-66 samples. Using physicochemical parameters from iGC, no unified hypothesis explaining all variances could be developed.
Altogether, it was shown that MOFs can replace or complement state-of-the-art adsorbents for the enrichment of specific analytes, preconcentration being a universal sensitivity-boosting concept for detectors and sensors. Especially with iGC as a powerful screening tool, the most suitable MOFs for a given target analyte can be identified. iGC can be used to determine "single point" retention volumes, which translate into partition coefficients for a specific MOF × analyte × temperature combination.
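The retention-volume-to-thermodynamics step mentioned here is, in the usual iGC treatment, a one-line relation (sketched with a hypothetical reference volume; only differences between probes measured on the same column are physically meaningful):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def free_energy_of_adsorption(v_n_ml_per_g, temperature_k, v_ref_ml_per_g=1.0):
    """Free enthalpy of adsorption from a net retention volume, as
    commonly evaluated in inverse gas chromatography:

        dG = -R * T * ln(V_N / V_ref)

    The choice of reference volume V_ref only shifts dG by a constant,
    so comparisons between probes on the same column are what matter.
    """
    return -R * temperature_k * math.log(v_n_ml_per_g / v_ref_ml_per_g)
```

Comparing dG for a nitroalkane probe against its saturated alkane counterpart, as described above for nitromethane on HKUST-1, isolates the specific contribution of the functional group.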