Graduate School of Science and Technology
The formation of collinear and non-collinear spin structures is attributed to different magnetic interactions. For applications in medicine and in data storage, it is necessary to understand under which parameters frustration occurs, in order to either avoid or exploit it. In this thesis, collinear and non-collinear spin structures are investigated in two different material systems. The first consists of three atomic layers of manganese on the (001) surface of a tungsten single crystal; the second contains manganese bonded to oxygen in chains on the (001) surface of an iridium single crystal.
Spin-polarized scanning tunneling microscopy (SP-STM) measurements and simulations of the three-layer, pseudomorphic manganese surface reveal a non-collinear spin structure, whereas density functional theory (DFT) calculations suggest a collinear ↑↓↓ spin configuration. When the chiral biquadratic pair interaction is taken into account, conical spin spirals with a small cone angle lie close to the energetically lowest state. The spin-resolved DFT results depend on the approximated geometric relaxation of the atomic structure. Combined SP-STM methods thus demonstrate spin spirals on a three-layer material system, and according to DFT the collinear or non-collinear state of the system is determined by its interlayer spacing.
SP-STM measurements on the manganese oxide chains reveal, depending on the preparation, either a collinear antiferromagnetic (AFM) or a non-collinear spin structure. It is further shown that these spin structures can be switched into one another via two different oxygen pressures and the supply of heat during preparation. Low-energy electron diffraction with variable voltage identifies two atomic structures that differ in their degree of oxidation. The non-collinear spin structure is already known in the literature as a 120° chiral spin spiral caused by the Dzyaloshinskii-Moriya-enhanced Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction. According to recent collinear DFT calculations, the collinear spin structure is antiferromagnetic along the chains with ferromagnetic coupling between the chains. Based on the evidence of a higher degree of oxidation, a stronger RKKY exchange interaction on the basis of the Heisenberg exchange is presumed. Here, the formation of collinear or non-collinear spin structures correlates with the degree of oxidation.
The production of commodities such as cocoa, rubber, oil palm and cashew is the main driver of deforestation in West Africa (WA). The practiced production systems correspond to a land management approach referred to as agroforestry systems (AFS), which consist of managing trees and crops on the same unit of land. Because of the ubiquity of trees, AFS are reported as a viable solution for climate mitigation; the carbon sequestered by the trees can be estimated with remote sensing (RS) data and methods and reported as emission reduction efforts. However, the diversity of AFS in terms of composition, structure and spatial distribution makes accurate monitoring of carbon stocks using RS challenging. Therefore, the aim of this research is to propose an RS-based approach for the estimation of carbon sequestration in AFS across the climatic regions of WA. The main objectives were to (i) provide an accurate classification map of AFS by modelling the spatial distribution of the classification error; (ii) estimate the carbon stock of AFS in the main climatic regions of WA using RS data; and (iii) evaluate the dynamics of carbon stocks within AFS across WA. Three regions of interest (ROI) were defined in Côte d'Ivoire and Burkina Faso, one in each climatic region of WA, namely the Guineo-Congolian, Guinean and Sudanian regions, and three field campaigns were carried out for data collection. The collected data consisted of reference points for image classification and biometric tree measurements (diameter, height, species) for biomass estimation. A total of 261 samples were collected in 12 AFS across WA. For the RS data, yearly composite images from Sentinel-1 and -2 (S1 and S2), ALOS-PALSAR and GEDI data were used. A supervised classification using random forest (RF) was implemented, and the classification error was assessed using the Shannon entropy generated from the class probabilities.
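The entropy-based error mapping described above, i.e. turning the random forest's per-pixel class probabilities into an uncertainty score, can be sketched as follows. The probability values are invented for illustration, and the threshold choice is left to the analyst; this is not the study's calibrated workflow:

```python
import numpy as np

def shannon_entropy(probabilities: np.ndarray) -> np.ndarray:
    """Per-pixel Shannon entropy from class-membership probabilities.

    probabilities: array of shape (n_pixels, n_classes); rows sum to 1.
    Returns entropy normalised to [0, 1] by log(n_classes), so 0 means
    an unambiguous classification and 1 maximal class confusion.
    """
    p = np.clip(probabilities, 1e-12, 1.0)        # avoid log(0)
    h = -np.sum(p * np.log(p), axis=1)            # raw entropy (nats)
    return h / np.log(probabilities.shape[1])     # normalise by max entropy

# A confident pixel vs. an ambiguous one (three classes, values assumed):
probs = np.array([[0.96, 0.02, 0.02],
                  [0.40, 0.35, 0.25]])
entropy = shannon_entropy(probs)
# Pixels whose entropy exceeds a chosen threshold would be flagged as
# likely misclassified and excluded or re-inspected.
```

With a classifier such as scikit-learn's RandomForestClassifier, the `probs` array would come from `predict_proba`; here it is hard-coded to keep the sketch self-contained.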
For carbon estimation, different RS data, machine learning algorithms and carbon reference sources were compared for the prediction of the aboveground biomass (AGB) in AFS. The assessment of the carbon dynamics was carried out between 2017 and 2021: an average carbon map was generated and used as a reference for the comparison of annual carbon estimations, with the standard deviation as threshold. Regarding the results, the classification accuracy was higher than 0.9 in all ROIs, and AFS were mainly represented by rubber (38.9%), cocoa (36.4%) and palm (10.8%) in ROI-1, mango (15.2%) and cashew (13.4%) in ROI-2, and shea tree (55.7%) and African locust bean (28.1%) in ROI-3. However, evidence of misclassification was found for cocoa, mango, and shea tree. The assessment of the classification error suggested that the error level was higher in ROI-3 and ROI-1. The error map generated from the entropy reduced the level of misclassification by 63% at the cost of an 11% loss of information. Moreover, the approach was able to accurately detect encroachment in protected areas. Regarding carbon estimation, the highest prediction accuracy (R² > 0.8) was obtained for an RF model using the combination of S1 and S2 and AGB derived from field measurements. Predictions from GEDI could only be used as a reference in ROI-1, and the prediction error was higher in cashew, mango, rubber and cocoa plantations. The carbon stock level was highest in African locust bean (43.9 t/ha), followed by shea tree (15 t/ha), cashew (13.8 t/ha), mango (12.8 t/ha), cocoa (7.51 t/ha) and rubber (7.33 t/ha). The analysis showed that carbon stock is determined mainly by the diameter (R² = 0.45) and height (R² = 0.13) of trees. Crop plantations had the lowest biodiversity level, and no significant relationship was found between the considered biodiversity indices and carbon stock levels.
The assessment of the spatial distribution of carbon sources and sinks showed that cashew plantations are carbon emitters due to firewood collection, while cocoa plantations showed the highest potential for carbon sequestration. The study revealed that Sentinel data could be used to support an RS-based approach for modelling carbon sequestration in AFS. Entropy could be used to map crop plantations and to monitor encroachment in protected areas. Moreover, field measurements with appropriate allometric models could ensure an accurate estimation of carbon stocks in AFS. Even though AFS in the Sudanian region had the highest carbon stock levels, there is a high potential to increase the carbon level in cocoa plantations by integrating and/or maintaining forest trees.
Accurate crop monitoring in response to climate change at a regional or field scale plays a significant role in developing agricultural policies, improving food security, forecasting, and analysing global trade trends. Climate change is expected to significantly impact agriculture, with shifts in temperature, precipitation patterns, and extreme weather events negatively affecting crop yields, soil fertility, water availability, biodiversity, and crop growing conditions. Remote sensing (RS), combined with crop growth models (CGMs), can provide valuable information for yield assessment by monitoring crop development, detecting crop changes, and assessing the impact of climate change on crop yields. This dissertation investigates the potential of RS data for modelling long-term crop yields of winter wheat (WW) and oilseed rape (OSR) for the Free State of Bavaria (70,550 km²), Germany. The first chapter describes the reasons why accurate crop yield predictions are important for achieving sustainability in agriculture. The second chapter assesses the accuracy of synthetic RS data obtained by fusing the NDVI of two high spatial resolution datasets (high pair: Landsat (30 m, 16 days; L) and Sentinel-2 (10 m, 5–6 days; S)) with four low spatial resolution datasets (low pair: MOD13Q1 (250 m, 16 days), MCD43A4 (500 m, one day), MOD09GQ (250 m, one day), and MOD09Q1 (250 m, 8 days)) using the spatial and temporal adaptive reflectance fusion model (STARFM), which fills cloud and shadow gaps without losing spatial information. The chapter finds that both L-MOD13Q1 (R² = 0.62, RMSE = 0.11) and S-MOD13Q1 (R² = 0.68, RMSE = 0.13) are more suitable for agricultural monitoring than the other fused synthetic products. The third chapter explores the ability of the synthetic spatiotemporal datasets obtained in chapter 2 to accurately map and monitor crop yields of WW and OSR at a regional scale.
The chapter investigates and discusses the optimal spatial resolution (10 m, 30 m, or 250 m), temporal resolution (8 or 16 days) and CGM (World Food Studies (WOFOST) or the semi-empirical light use efficiency (LUE) approach) for accurate crop yield estimation of both crop types. It finds that the high temporal resolution (8-day) products of both S-MOD13Q1 and L-MOD13Q1 play a significant role in accurately measuring the yields of WW and OSR, and that the simple LUE model (R² = 0.77, relative RMSE (RRMSE) = 8.17%), which requires fewer input parameters to simulate crop yield, is more accurate, reliable, and precise than the complex WOFOST model (R² = 0.66, RRMSE = 11.35%) with its larger number of input parameters. The fourth chapter examines the influence of spatiotemporal fusion modelling with STARFM on crop yield prediction for WW and OSR using the LUE model for Bavaria from 2001 to 2019. It reports high positive correlation coefficients (R = 0.81 for WW and R = 0.77 for OSR) between the yearly R² of the synthetic data accuracy and the modelled yield accuracy, and analyses the impact of climate variables on crop yield predictions, observing an increase in R² (0.79 (WW)/0.86 (OSR)) and a decrease in RMSE (4.51/2.57 dt/ha) when the climate effect is included in the model. The fifth chapter shows that coupling the LUE model to a random forest (RF) model can further reduce the RRMSE by 8% (WW) and 1.6% (OSR) and increase the R² by 14.3% (for both WW and OSR) compared to results relying on LUE alone, and concludes that satellite-based crop biomass, solar radiation, and temperature are the most influential variables in the yield prediction of both crop types. The sixth chapter discusses the pros and cons of RS technology while analysing the impact of land use diversity on the modelled biomass of WW and OSR.
The chapter finds that the modelled biomass of both crops is positively impacted by land use diversity up to a radius of 450 m (Shannon Diversity Index ~0.75) and 1050 m (~0.75), respectively. It also discusses future implications, stating that including further factors (such as management practices, soil health, pest management, and pollinators) could improve the relationship of RS-modelled crop yields with biodiversity. Lastly, the seventh chapter discusses the scope of new sensors, such as unmanned aerial vehicles, hyperspectral sensors, or Sentinel-1 SAR, for achieving accurate crop yield predictions for precision farming, and highlights the significance of artificial intelligence (AI) and deep learning (DL) in obtaining higher crop yield accuracies.
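The light use efficiency approach used for yield modelling above follows the classic Monteith formulation, in which biomass accumulates as a maximum efficiency times absorbed radiation, down-regulated by stress scalars. A minimal sketch with invented input values and parameters; this is not the dissertation's calibrated Bavarian model:

```python
import numpy as np

def lue_biomass(par, fapar, eps_max=3.0, t_scalar=1.0, w_scalar=1.0):
    """Monteith-type light use efficiency (LUE) biomass model.

    par       : incident photosynthetically active radiation per
                compositing period (MJ/m^2)
    fapar     : fraction of PAR absorbed by the canopy (e.g. derived
                from NDVI), in 0..1
    eps_max   : maximum light use efficiency (g dry matter per MJ APAR)
    t_scalar,
    w_scalar  : temperature / water stress scalars in 0..1
    Returns accumulated above-ground dry biomass (g/m^2).
    """
    par = np.asarray(par, dtype=float)
    fapar = np.asarray(fapar, dtype=float)
    per_period = eps_max * par * fapar * t_scalar * w_scalar
    return float(np.sum(per_period))

# Illustrative 8-day composites over part of a season (all values assumed):
par_8day = np.array([12.0, 14.5, 16.0, 17.2]) * 8   # MJ/m^2 per composite
fapar_8day = [0.35, 0.55, 0.70, 0.65]
biomass = lue_biomass(par_8day, fapar_8day)          # ≈ 829 g/m^2 here
```

Yield would then follow from the accumulated biomass via a crop-specific harvest index, which is omitted here for brevity.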
There is great interest in affordable, precise and reliable metrology underwater:
Archaeologists want to document artifacts in situ with high detail.
In marine research, biologists require the tools to monitor coral growth and geologists need recordings to model sediment transport.
Furthermore, millimeter-accurate measurements of defects and structures are essential for offshore construction projects, maintenance, and inspection.
While the process of digitizing individual objects and complete sites on land is well understood and standard methods, such as Structure from Motion or terrestrial laser scanning, are regularly applied, precise underwater surveying with high resolution is still a complex and difficult task.
Applying optical scanning techniques in water is challenging due to reduced visibility caused by turbidity and light absorption.
However, optical underwater scanners provide significant advantages in terms of achievable resolution and accuracy compared to acoustic systems.
This thesis proposes an underwater laser scanning system and the algorithms for creating dense and accurate 3D scans in water.
It is based on laser triangulation and the main optical components are an underwater camera and a cross-line laser projector.
The prototype is configured with a motorized yaw axis for capturing scans from a tripod.
Alternatively, it is mounted to a moving platform for mobile mapping.
The main focus lies on the refractive calibration of the underwater camera and laser projector, the image processing and 3D reconstruction.
For highest accuracy, the refraction at the individual media interfaces must be taken into account.
This is addressed by an optimization-based calibration framework using a physical-geometric camera model derived from an analytical formulation of a ray-tracing projection model.
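The refraction at the individual media interfaces can be illustrated with the vector form of Snell's law, the basic building block of such a ray-tracing projection model. The refractive indices and the flat air-glass-water interface geometry below are generic textbook assumptions, not the calibrated housing of this thesis:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract a unit ray direction d at an interface with unit normal n
    (n points against the incoming ray), using the vector form of
    Snell's law. n1, n2 are the refractive indices of the incident and
    transmitting media. Returns the refracted unit direction, or None
    for total internal reflection."""
    d = np.asarray(d, float)
    n = np.asarray(n, float)
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# A camera ray crossing air -> glass port -> water (generic indices:
# air 1.0, glass 1.5, water 1.33), flat interfaces with normal (0, 0, -1):
normal = np.array([0.0, 0.0, -1.0])
ray = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
in_glass = refract(ray, normal, 1.0, 1.5)
in_water = refract(in_glass, normal, 1.5, 1.33)
```

Tracing each camera and laser ray through such interface refractions, and optimizing the interface parameters against observations, is the essence of the physical-geometric calibration described above.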
In addition to scanning underwater structures, this work presents the 3D acquisition of semi-submerged structures and the correction of refraction effects.
As in-situ calibration in water is complex and time-consuming, the challenge of transferring an in-air scanner calibration to water without re-calibration is investigated, as well as self-calibration techniques for structured light.
The system was successfully deployed in various configurations for both static scanning and mobile mapping.
An evaluation of the calibration and 3D reconstruction using reference objects and a comparison of free-form surfaces in clear water demonstrate the high accuracy potential in the range of one millimeter to less than one centimeter, depending on the measurement distance.
Mobile underwater mapping and motion compensation based on visual-inertial odometry is demonstrated using a new optical underwater scanner based on fringe projection.
Continuous registration of individual scans allows the acquisition of 3D models from an underwater vehicle.
RGB images captured in parallel are used to create 3D point clouds of underwater scenes in full color.
3D maps are useful to the operator during the remote control of underwater vehicles and provide the building blocks to enable offshore inspection and surveying tasks.
The advancing automation of the measurement technology will allow non-experts to use it, significantly reduce acquisition time and increase accuracy, making underwater metrology more cost-effective.
Motivated by the perceived great potential of chiral polymers, the presented work investigated the synthesis, solubility and optical activity of chiral poly(2,4-disubstituted-2-oxazoline)s. A novel polymeric carrier based on ABA-type poly(2-oxazoline) triblock copolymers with chiral and racemic hydrophobic blocks was developed for the formulation of chiral and achiral drugs (Fig. 5.1). Poly(2-methyl-2-oxazoline) (pMeOx) served as the hydrophilic A block, while poly(2-ethyl-4-ethyl-2-oxazoline) (pEtEtOx) and poly(2-propyl-4-methyl-2-oxazoline) (pPrMeOx) served as hydrophobic B blocks. Curcumin (CUR), paclitaxel (PTX) and chiral/racemic ibuprofen (R/S/RS-IBU) were applied as model drugs, and nanoformulations consisting of these triblock copolymers and model drugs were prepared. ...
Various concepts of X-ray microscopy have meanwhile become established in the laboratory and today provide revealing insights into a wide range of sample systems. "Laboratory scale" here refers to analysis methods that can be operated as a standalone instrument; in particular, they are independent of beam generation at a large-scale synchrotron research facility with its kilometre-sized electron storage ring. Many of the technical innovations in the laboratory are a transfer of techniques developed at the synchrotron; others are based on the consistent further development of established concepts. Resolution alone is not decisive for the specific suitability of a microscopy system as a whole; the energy spectrum used for imaging should also be matched to the sample system. In addition, a tomography system must be able to preserve its imaging performance in 3D acquisitions.
After an overview of different X-ray microscopy techniques, this thesis focuses on source-based nano-CT with projection magnification as a promising technology for materials analysis. Here, higher photon energies can be used than in competing approaches, as required for the investigation of more strongly absorbing samples, e.g. those with a high metal content. In an otherwise ideal CT instrument, the component limiting resolution and performance is the X-ray source, and the largest leaps in performance are to be expected from design innovations there. In this context, it is discussed whether brilliance is a suitable measure for evaluating the performance of X-ray sources, which difficulties its practical measurement is subject to, and how this affects the comparability of the values. Using Monte Carlo simulations, it is shown how the brilliance of different X-ray source designs can be determined theoretically and compared, demonstrated on three modern X-ray source concepts suitable for microscopy. Furthermore, this thesis addresses the performance limits of transmission X-ray sources. Based on a coupled simulation of a nanofocus X-ray source using Monte Carlo and FEM methods, it is examined whether established literature models are still applicable to modern source designs. From the simulations, a new way is then derived to determine the performance limits of nanofocus X-ray sources and the advantage that modern structured targets offer.
Finally, the construction of a new laboratory-scale nano-CT instrument based on the previously discussed nanofocus X-ray source and projection magnification is presented, and its performance is validated. It is specifically designed to enable high-resolution 3D measurements of material systems that were not feasible with previous methods due to insufficient resolution or energy. The practical performance of the instrument is therefore validated on real samples and problems from materials science and semiconductor inspection. In particular, the presented measurements of defects in microchips from the automotive sector were previously not possible in this way.
In manufacturing companies, various approaches are used to plan, monitor, and control production processes. One of these methods is known as the activity-on-node network planning technique (Vorgangsknotennetzplantechnik). The individual production steps are defined as nodes and connected by arrows; the arrows represent the relationships between the respective activities and thus the production flow. This technique gives users a comprehensive overview of the individual process relations. In addition, it can be used to determine activity times and product completion times, enabling detailed production planning. A disadvantage of this technique is that it represents only a single executable process sequence. If a disruption occurs and an activity cannot be carried out, the original process must be abandoned and replanning becomes necessary: alternatives for the disrupted activity are needed so that the process can continue despite the disruption. This thesis therefore describes an extension of the activity-on-node network planning technique that makes it possible to specify alternative activities for individual activities in addition to the planned target process. This method is called the Maximalnetzplan (maximal network plan). If a disruption occurs, the alternatives are evaluated automatically and presented to the user in prioritized order. Using the Maximalnetzplan, laborious replanning can be avoided. An assembly process serves as an application example demonstrating the usability of the method. Furthermore, a timing analysis of randomly generated maximal network plans provides a rationale for executing alternatives and thus demonstrates the benefit of the Maximalnetzplan.
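The core idea of evaluating pre-modelled alternatives instead of replanning can be sketched as follows. The activity names, durations, and the simple duration-based ranking criterion are invented for illustration and stand in for the thesis's actual prioritization logic:

```python
# Hypothetical sketch of a Maximalnetzplan-style fallback lookup: each
# planned activity carries pre-modelled alternative activities; when an
# activity is disrupted, the alternatives are ranked (here simply by
# duration) and presented instead of replanning the whole network.
plan = {
    "mount_frame": {"duration": 10, "alternatives": []},
    "fit_bearing": {"duration": 5,
                    "alternatives": [("press_bearing", 7),
                                     ("shrink_fit_bearing", 6)]},
    "final_check": {"duration": 3, "alternatives": []},
}

def rank_alternatives(plan, disrupted_activity):
    """Return the alternatives of a disrupted activity as
    (name, duration) tuples, best (shortest duration) first."""
    alts = plan[disrupted_activity]["alternatives"]
    return sorted(alts, key=lambda a: a[1])

ranked = rank_alternatives(plan, "fit_bearing")
# ranked[0] is the preferred fallback: ("shrink_fit_bearing", 6)
```

In a full implementation the ranking would consider the downstream network (successor constraints, completion times) rather than a single activity's duration.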
It should also be noted that terms used within this work, such as user, worker, or employee, are written in the masculine form. This is solely for simplicity and is not intended to discriminate against other genders; the notation used is meant to address all genders, whether male, female, or diverse.
Metallic nanostructures possess the ability to support resonances in the visible wavelength regime which are related to localized surface plasmons. These create highly enhanced electric fields in the immediate vicinity of metal surfaces. Nanoparticles with dipolar resonance also radiate efficiently into the far-field and hence serve as antennas for light. Such optical antennas have been explored during the last two decades, however, mainly as standalone units illuminated by external laser beams and more recently as electrically driven point sources, yet merely with basic antenna properties. This work advances the state of the art of locally driven optical antenna systems. As a first instance, the electric driving scheme including inelastic electron tunneling over a nanometer gap is merged with Yagi-Uda theory. The resulting antenna system consists of a suitably wired feed antenna, incorporating a tunnel junction, as well as several nearby parasitic elements whose geometry is optimized using analytical and numerical methods. Experimental evidence of unprecedented directionality of light emission from a nanoantenna is provided. Parallels in the performance between radiofrequency and optical Yagi-Uda arrays are drawn. Secondly, a pair of electrically connected antennas with dissimilar resonances is harnessed as electrodes in an organic light emitting nanodiode prototype. The organic material zinc phthalocyanine, exhibiting asymmetric injection barriers for electrons and holes, in conjunction with the electrode resonances, allows switching and controlling the emitted peak wavelength and directionality as the polarity of the applied voltage is inverted. In a final study, the near-field based transmission-line driving of rod antenna systems is thoroughly explored. Perfect impedance matching, corresponding to zero back-reflection, is achieved when the antenna acts as a generalized coherent perfect absorber at a specific frequency. 
It thus collects all guided, surface-plasmon mediated input power and transduces it to other nonradiative and radiative dissipation channels. The coherent interplay of losses and interference effects turns out to be of paramount importance for this delicate scenario, which is systematically obtained for various antenna resonances. By means of the here developed semi-analytical toolbox, even more complex nanorod chains, supporting topologically nontrivial localized edge states, are studied. The results presented in this work facilitate the design of complex locally driven antenna systems for optical wireless on-chip communication, subwavelength pixels, and loss-compensated integrated plasmonic nanocircuitry which extends to the realm of topological plasmonics.
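The impedance-matching condition discussed above, zero back-reflection of the guided input, parallels the textbook transmission-line reflection coefficient. A minimal sketch with assumed impedance values (the thesis treats the general coherent-perfect-absorber case, which this simple formula only illustrates):

```python
def reflection_coefficient(z_load: complex, z0: complex) -> complex:
    """Standard transmission-line reflection coefficient
    Gamma = (Z_L - Z_0) / (Z_L + Z_0). Gamma = 0 means perfect
    impedance matching: all guided input power is delivered to the
    load (here, the antenna's radiative and nonradiative channels)."""
    return (z_load - z0) / (z_load + z0)

# Illustrative (assumed) line and antenna input impedances in ohms:
z_line = 50 + 0j
gamma_matched = reflection_coefficient(50 + 0j, z_line)   # 0: no back-reflection
gamma_detuned = reflection_coefficient(30 - 20j, z_line)  # partial reflection
reflected_power = abs(gamma_detuned) ** 2                 # fraction of power reflected
```

At the matched frequency the antenna behaves as a generalized coherent perfect absorber of the surface-plasmon input, which is the condition the semi-analytical toolbox is used to engineer.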
Development, Simulation and Evaluation of Mobile Wireless Networks in Industrial Applications
(2023)
Many industrial automation solutions use wireless communication and rely on the availability and quality of the wireless channel. At the same time, the wireless medium is highly congested, and guaranteeing the availability of wireless channels is becoming increasingly difficult. In this work we show that ad-hoc networking solutions can be used to provide new communication channels and improve the performance of mobile automation systems. These ad-hoc networking solutions describe different communication strategies, but avoid relying on network infrastructure by utilizing the Peer-to-Peer (P2P) channel between communicating entities.
This work is a step towards the effective implementation of low-range communication technologies (e.g. Visible Light Communication (VLC), radar communication, mmWave communication) in industrial applications. Implementing infrastructure networks with these technologies is unrealistic, since the low communication range would necessitate a high number of Access Points (APs) to yield full coverage. However, ad-hoc networks do not require any network infrastructure. In this work, different ad-hoc networking solutions for the industrial use case are presented, and tools and models for their examination are proposed.
The main use case investigated in this work is Automated Guided Vehicles (AGVs) for industrial applications. These mobile devices drive throughout the factory, transporting crates, goods or tools or assisting workers. In most implementations they must exchange data with a Central Control Unit (CCU) and between one another. Predicting whether a certain communication technology is suitable for an application is very challenging, since the applications and the resulting requirements are very heterogeneous.
The proposed models and simulation tools enable the simulation of the complex interaction of mobile robotic clients and a wireless communication network. The goal is to predict the characteristics of a networked AGV fleet.
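The interaction between mobility and connectivity described above can be caricatured in a few lines. The following toy model, with entirely assumed names and parameters (none are from the thesis), lets the relative distance of two AGVs drift as a random walk and counts a periodic status message as delivered only while they are within P2P radio range:

```python
import random

def simulate(n_steps, agv_speed, peer_dist0, comm_range, seed=1):
    """Toy sketch of the mobility/connectivity interaction: two AGVs
    perform a 1D relative random walk, and a periodic status message
    succeeds only while they are within P2P radio range.
    Every name and parameter here is illustrative, not from the thesis."""
    rng = random.Random(seed)
    d = peer_dist0                      # current peer distance
    delivered = 0
    for _ in range(n_steps):
        d += rng.uniform(-agv_speed, agv_speed)
        if abs(d) <= comm_range:        # link up: message delivered
            delivered += 1
    return delivered / n_steps          # delivery ratio
```

A real simulator, as proposed in the thesis, would of course model 2D factory layouts, channel fading, and protocol behaviour; this sketch only shows why fleet behaviour and link quality must be simulated jointly.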
The proposed tools were used to implement, test, and examine different ad-hoc networking solutions for industrial applications using AGVs. These communication solutions handle time-critical and delay-tolerant communication. Additionally, a control method for the AGVs is proposed, which optimizes the communication and in turn increases the transport performance of the AGV fleet. Therefore, this work provides not only tools for further research on industrial ad-hoc systems, but also first implementations of ad-hoc systems which address many of the most pressing issues in industrial applications.
Ongoing changes in spaceflight – continuing miniaturization, declining costs of rocket launches and satellite components, and improved satellite computing and control capabilities – are advancing Satellite Formation Flying (SFF) as a research and application area. SFF enables new applications that cannot be realized (or cannot be realized at a reasonable cost) with conventional single-satellite missions. In particular, distributed Earth observation applications such as photogrammetry and tomography or distributed space telescopes require precisely placed and controlled satellites in orbit.
Several enabling technologies are required for SFF, such as inter-satellite communication, precise attitude control, and in-orbit maneuverability. However, one of the most important requirements is a reliable distributed Guidance, Navigation and Control (GNC) strategy. This work addresses the issue of distributed GNC for SFF in 3D with a focus on Continuous Low-Thrust (CLT) propulsion satellites (e.g., with electric thrusters) and concentrates on circular low Earth orbits. However, the focus of this work is not only on control theory, but control is considered as part of the system engineering process of typical small satellite missions. Thus, common sensor and actuator systems are analyzed to derive their characteristics and their impacts on formation control. This serves as the basis for the design, implementation, and evaluation of the following control approaches: First, a Model Predictive Control (MPC) method with specific adaptations to SFF and its requirements and constraints; second, a distributed robust controller that combines consensus methods for distributed system control and $H_{\infty}$ robust control; and finally, a controller that uses plant inversion for control and combines it with a reference governor to steer the controller to the target on an optimal trajectory considering several constraints. The developed controllers are validated and compared based on extensive software simulations. Realistic 3D formation flight scenarios were taken from the Networked Pico-Satellite Distributed System Control (NetSat) cubesat formation flight mission. The three compared methods show different advantages and disadvantages in the different application scenarios. The distributed robust consensus-based controller for example lacks the ability to limit the maximum thrust, so it is not suitable for satellites with CLT. 
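The consensus idea behind the distributed robust controller, and the thrust-limitation problem noted above, can be sketched informally. This is a minimal single-integrator consensus step with optional thrust clipping, not the thesis' actual controller; all names, gains, and the 1D setting are illustrative assumptions:

```python
import numpy as np

def consensus_step(pos, offsets, adj, gain, dt, u_max=None):
    """One Euler step of a single-integrator consensus controller that
    drives each agent i to its formation slot offsets[i].
    Illustrative sketch only; all names and gains are assumptions."""
    err = pos - offsets                  # deviation from formation slot
    u = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if adj[i, j]:                # neighbour j is in the graph
                u[i] -= gain * (err[i] - err[j])
    if u_max is not None:
        # naive thrust saturation: clipping breaks the nominal consensus
        # law, an informal version of the CLT limitation mentioned above
        u = np.clip(u, -u_max, u_max)
    return pos + dt * u
```

Without the clip, the relative formation error contracts geometrically; with it, convergence still occurs here but the controller no longer matches its design model, which is why thrust limits matter for CLT satellites.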
Both the MPC-based approach and the plant-inversion-based controller, however, are suitable for CLT SFF applications, while again showing distinct advantages and disadvantages in different scenarios.
The scientific contribution of this work may be summarized as the creation of novel and specific control approaches for the class of CLT SFF applications, a class that still lacks methods robust enough for use in real space missions, as well as the scientific evaluation and comparison of the developed methods.
Metallic nano-optical systems make it possible to confine and guide light at the nanoscale,
a fascinating ability which has motivated a wide range of fundamental as well
as applied research over the last two decades. While optical antennas provide
a link between visible radiation and localized energy, plasmonic waveguides
route light in predefined pathways. So far, however, most experimental demonstrations
are limited to purely optical excitations, i.e. isolated structures are
illuminated by external lasers. Driving such systems electrically and generating light at the nanoscale would greatly reduce the device footprint and pave the way for integrated optical nanocircuitry. Yet, the light emission mechanism as well as connecting delicate nanostructures to external electrodes pose key challenges
and require sophisticated fabrication techniques. This work presents various
electrically connected nano-optical systems and outlines a comprehensive
production line, thus significantly advancing the state of the art. Importantly,
the electrical connection is not just used to generate light, but also offers new
strategies for device assembly. In a first example, nanoelectrodes are selectively
functionalized with self-assembled monolayers by charging a specific electrode.
This makes it possible to tailor the surface properties of nanoscale objects, introducing an
additional degree of freedom to the development of metal-organic nanodevices.
In addition, the electrical connection enables the bottom-up fabrication of tunnel
junctions by feedback-controlled dielectrophoresis. The resulting tunnel barriers
are then used to generate light in different nano-optical systems via inelastic
electron tunneling. Two structures are discussed in particular: optical Yagi-Uda
antennas and plasmonic waveguides. Their refined geometries, accurately fabricated
via focused ion beam milling of single-crystalline gold platelets, determine
the properties of the emitted light. It is shown experimentally that Yagi-Uda antennas radiate light in a specific direction with unprecedented directionality, while plasmonic waveguides allow switching between the excitation of two propagating modes with orthogonal near-field symmetry. The presented devices
nicely demonstrate the potential of electrically connected nano-optical systems,
and the fabrication scheme including dielectrophoresis as well as site-selective
functionalization will inspire more research in the field of nano-optoelectronics.
In this context, different future experiments are discussed, ranging from the
control of molecular machinery to optical antenna communication.
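Light generation by inelastic electron tunneling, as used for the Yagi-Uda antennas and waveguides above, obeys a simple quantum cutoff: a tunneling electron of energy eV can emit a photon of at most that energy, so λ_min = hc/(eV). A minimal sketch of this well-known relation (the function name is an assumption, not from the thesis):

```python
HC_OVER_E = 1239.841984  # h*c/e in eV*nm

def cutoff_wavelength_nm(bias_voltage_v):
    """Quantum cutoff of light emission by inelastic electron tunneling:
    a tunneling electron of energy e*V emits at most that energy,
    so lambda_min = h*c / (e*V)."""
    return HC_OVER_E / bias_voltage_v
```

For example, a 2 V bias limits the emission to wavelengths above roughly 620 nm, which is why the bias voltage directly sets the accessible spectral range of such devices.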
Motivated by the great potential offered by the combination of additive manufacturing technology and hydrogels, especially in the field of tissue engineering and regenerative medicine, a series of novel hybrid hydrogel inks was developed based on the recently described thermogelling poly(2-oxazoline)s-block-poly(2-oxazine)s diblock copolymers, which may help to expand the platform of available hydrogel inks for this transformative 3D printing technology (Fig. 5.1).
In the present thesis, the first reported thermogelling polymer consisting solely of POx and POzi, i.e., the diblock copolymer PMeOx-b-PnPrOzi comprising a hydrophilic block (PMeOx) and a thermoresponsive block (PnPrOzi), was selected and used as a proof of concept for the preparation of three novel hybrid hydrogels. To this end, three batches of the diblock copolymer with a DP of 100 were synthesized for the study of three different hybrid hydrogels, with a special focus on their suitability as (bio)inks for extrusion-based 3D printing. The PMeOx-b-PnPrOzi diblock copolymer solution shows temperature-induced reversible gelation above a critical polymer concentration of 20 wt%, as described for the Pluronic F127 solution, but with a unique gelation mechanism working through the formation of a bicontinuous sponge-like structure from physically crosslinked vesicles. In particular, its intrinsic shear-thinning behavior and excellent recovery, together with a certain yield point, make it a promising ink candidate for extrusion-based printing technology.
Increasing the polymer concentration is the most traditional approach to improving the printability of an ink material, and it was the major strategy available for improving the printability of PMeOx-b-PnPrOzi systems prior to this work. From the analysis of rheological properties related to printability, it was concluded that increasing the copolymer concentration does improve the hydrogel strength and thus the printability. However, such improvement is very limited and usually leads to other problems, such as more viscous systems and stringent requirements on the printers, which are not ideal for the printing process and for applications, especially in the field of cell-laden biofabrication.
POx-b-POzi/clay Hybrid Hydrogel
An alternative method proposed to improve the printability of this thermoresponsive hydrogel ink is the addition of nanoclay (Laponite XLG), yielding the first hybrid hydrogel system of this thesis, PMeOx-b-PnPrOzi/clay (also referred to as POx-b-POzi/clay). To optimize the viscoelastic properties of the ink material, Laponite XLG, acting as a reinforcing additive and physical crosslinker, was blended with the copolymers. Compared with the pristine copolymer solution of PMeOx-b-PnPrOzi, the hybrid PMeOx-b-PnPrOzi/clay solution retained the temperature-induced gelation of the copolymers well.
The obtained hybrid hydrogels exhibited rapid in situ reversible thermogelation at a physiologically relevant Tgel of around 15 °C and a rapid recovery of viscoelastic properties within a few seconds. More importantly, with the addition of only a small amount of 1.2 wt% clay, they exhibited markedly enhanced shear-thinning character (n = 0.02), yield stress (240 Pa), and mechanical strength (storage modulus over 5 kPa). With this novel hybrid hydrogel, true three-dimensional constructs with multiple layers and various geometries were generated with greatly enhanced shape fidelity and resolution. In this context, the thermogelling properties of the hybrid hydrogels over a copolymer concentration range of 10-20 wt% and a clay concentration of 0-4 wt% were systematically investigated, from which a printable window was established as a laboratory reference.
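The quoted rheological figures (flow index n = 0.02, yield stress 240 Pa) fit the standard Herschel-Bulkley picture τ = τ_y + K·γ̇ⁿ for yield-stress, shear-thinning fluids. A sketch with an assumed consistency index K (K is a placeholder, not a measured value from this work) illustrates how sharply the apparent viscosity drops under the shear experienced in the printing nozzle:

```python
def hb_stress(gamma_dot, tau_y=240.0, K=150.0, n=0.02):
    """Herschel-Bulkley shear stress: tau = tau_y + K * gamma_dot**n.
    tau_y and n are taken from the measurements quoted above; the
    consistency index K (Pa*s**n) is an assumed placeholder."""
    return tau_y + K * gamma_dot ** n

def apparent_viscosity(gamma_dot, **kw):
    """Apparent viscosity tau / gamma_dot in Pa*s."""
    return hb_stress(gamma_dot, **kw) / gamma_dot
```

With n this close to zero, the stress is nearly rate-independent above the yield point, so the apparent viscosity falls roughly as 1/γ̇, which is exactly the behavior that lets the gel flow through the nozzle yet hold its shape after deposition.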
In fact, the printing performance of an ink is not only determined by the intrinsic physicochemical properties of the material, but is also influenced by the external printing environment as well as the printer parameter settings. All the printing experiments in this study were conducted under relatively optimized conditions obtained from preliminary experiments. In future work, the relationship between material rheology, printer parameters, and printing performance could be systematically explored. Such a fundamental study will help to develop models that allow the prediction and comparison of printing results from different studies based on parameters available through rheology, which would be very beneficial for the further development of more advanced ink systems.
Although the printability has been significantly improved by the addition of the nanoclay Laponite XLG, the hybrid hydrogels and their printed constructs still suffer from some major limitations. For example, these materials are still thermoresponsive, which will cause the printed constructs to collapse when the ambient temperature drops below their Tgel. In addition, the formed hydrogel constructs are mechanically too weak for load-bearing applications, and the permissible incubation time during media exchange/addition is very limited, as dilution effects will lead to dissolution of the hydrogels. Therefore, it is essential to establish a second (chemical or physical) crosslinking mechanism that allows further solidification of the gels after printing. It should be kept in mind that the second crosslinking step will eliminate the thermoresponsive behavior of the gels and thus the possibility of cell recovery. In this case, besides the traditional approach of copolymer modification to realize further crosslinking, such as the well-known Diels-Alder post-polymerization modification,[430] the design of interpenetrating network (IPN) hydrogels serves as one of the major strategies for advanced (bio)ink preparation.[311] Therefore, the second hybrid hydrogel system, PMeOx-b-PnPrOzi/PDMAA/clay (also referred to as POx-b-POzi/PDMAA/clay), was developed in this thesis: a 3D-printable and highly stretchable ternary organic-inorganic IPN hydrogel.
POx-b-POzi/PDMAA/clay Hybrid Hydrogel
The nanocomposite IPN hydrogel combines the thermoresponsive hydrogel with clay described above and in situ polymerized poly(N,N-dimethylacrylamide). Before in situ polymerization, the thermoresponsive hydrogel precursors exhibited thermogelling behavior (Tgel ~ 25 °C, G' ~ 6 kPa) and shear-thinning properties, making the system well suited for extrusion-based 3D printing. After chemical curing of the 3D-printed constructs by free radical polymerization, the resulting IPN hydrogels show excellent mechanical strength with high stretchability, reaching a tensile strain at break exceeding 550%. The hybrid hydrogel can sustain high stretching deformation and recover quickly due to energy dissipation from the non-covalent interactions. With this hybrid hydrogel, in combination with advanced 3D printing, various 3D constructs can be printed and cured successfully with high shape fidelity and geometric accuracy.
In this context, we also investigated the possibility of using acrylic acid (AA) and 2-hydroxyethyl methacrylate (HEMA) as alternative hydrogel precursors. However, the addition of these two monomers affected the thermogelation of POx-b-POzi in an unfavorable manner, as they competed more effectively with water molecules, preventing the hydration of the nPrOzi block at lower temperatures and therefore the liquefaction of the gels. Furthermore, the influence of the printing process and direction on the mechanical properties of the hydrogel was investigated and compared with the corresponding bulk materials obtained from a mold. No significant effects from the additive manufacturing process were observed, owing to homogeneous adhesion and merging between sequentially deposited layers. In the future, further studies on the specific performance differences among hydrogels fabricated at different printing directions/speeds would be of great interest to the community, as this would allow more accurate control and better prediction of the printed structures.
This newly developed hybrid IPN hydrogel is expected to expand the material toolbox available for hydrogel-based 3D printing, and may be interesting for a wide range of applications including tissue engineering, drug delivery, soft robotics, and additive manufacturing in general. However, the toxicity of residual DMAA monomer and other small molecules in the polymerized hydrogels makes this hybrid hydrogel less than ideal for bioprinting in the field of biofabrication. To address this problem, cyto-/biocompatible monomers such as polyethylene glycol diacrylate (PEGDA) can be used as an alternative, although the overall properties of the hydrogels, including the mechanical properties, should then be re-evaluated accordingly. Moreover, the swelling behavior of the hydrogels should also be taken into account, as it will most likely affect the mechanical strength and geometry of the printed scaffold, but is often overlooked after printing. For example, for the specific hybrid hydrogel POx-b-POzi/PDMAA/clay in this work, an equilibrium swelling ratio of 1100% was determined. The printed hydrogel cuboid underwent a more than 6-fold volume increase after equilibrium swelling in water and became mechanically fragile due to the formation of a swollen hydrogel network absorbing a large amount of water.
POx-b-POzi/Alg/clay Hybrid Hydrogel
In the final part of this dissertation, to enable cell-loaded bioprinting and long-term cell culture, the third hybrid hydrogel system, POx-b-POzi/Alg/clay, was introduced by replacing the monomer DMAA with the natural polysaccharide alginate. Initially, detailed rheological characterization and mechanical tests were performed to evaluate printability and mechanical properties. Subsequently, simple patterns were printed with the optimized hydrogel precursor solutions for preliminary filament fusion and collapse tests before proceeding to more complex prints. The fibers showed sufficient stability to allow the creation of large structures with a height of a few centimeters and suspended filaments up to a centimeter in length. Accordingly, various 3D constructs including suspended filaments were printed successfully with high stackability and shape fidelity. The structure after extrusion was easily physically crosslinked by soaking in CaCl2 solution and thereafter exhibited good mechanical flexibility and long-term stability. Interestingly, the mechanical strength and geometry of the generated scaffolds were well maintained over a culture period of weeks in water, which is of great importance for clinical applications. In addition, the post-printing ionic crosslinking of alginate could also be realized with other di-/trivalent cations such as Fe3+ and Tb3+.
Subsequently, cell-laden printing with this hybrid hydrogel and post-printing crosslinking by Ca2+ ions were demonstrated, highlighting its feasibility for 3D bioprinting. A WST-1 assay with fibroblasts suggested cytocompatibility of the hydrogel precursor solution without dose-dependent toxicity. The cell distribution was uniform throughout the printed construct, and cells proliferated with high viability during the 21-day culture. The presented hybrid approach, utilizing the beneficial properties of the POx-b-POzi base material, could be interesting for a wide range of bioprinting applications, potentially also enabling other biological bioinks such as collagen, hyaluronic acid, decellularized extracellular matrix, or cellulose-based bioinks. Although the results look promising and the developed hydrogel is an important bioink candidate, long-term in vitro cell studies with different cell lines and the establishment of clinical models are still under investigation; this remains a long road, but is of great importance before real clinical application can be realized.
Last but not least, the improvement of the printability of thermogelling POx/POzi-based copolymers by the clay Laponite XLG was also demonstrated for another thermogelling copolymer, PEtOx-b-PnPrOzi. This suggests that the addition of clay may be a general strategy to improve the printability of such polymers. Despite the advances in this work, which significantly extended the (bio)material platform of additive manufacturing technology, the competition is still fierce, and more work should be done in the future to reveal the potential and limitations of this new and promising class of candidate (bio)ink materials. Further creative work based on the thermogelling POx/POzi polymers is also highly anticipated, such as crosslinking in Ca2+ solution containing the monomer acrylamide to prepare printable and mechanically tough hydrogels, research on POx-based support bath materials, and the omnidirectional printing of clinically more relevant sophisticated structures such as 3D microvascular networks.
Dielectric elastomer sensors are sensors built from elastomer materials with a capacitive measurement principle. In their simplest form, they consist of a stretchable elastomer film as the dielectric, covered on both sides with conductive, likewise stretchable layers serving as electrodes.
This creates a mechanically deformable electrical capacitor whose capacitance increases steadily with the strain of the elastomer film. Besides such strain sensors, a suitable geometric design also allows the realization of dielectric elastomer sensors in which an electrical capacitance increases measurably with a pressure or force applied to the surface, with a shear force, or with the approach of an electrically conductive or polarizable body such as the human hand.
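The capacitive principle reduces to the parallel-plate formula C = ε₀εᵣA/d. For an incompressible film under uniaxial stretch λ (length scales with λ, width and thickness with λ^(-1/2)), the capacitance then scales linearly with λ. A minimal sketch with assumed film dimensions and permittivity (illustrative values only):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r, area_m2, thickness_m):
    """Parallel-plate capacitance of the elastomer film: C = eps0*eps_r*A/d."""
    return EPS0 * eps_r * area_m2 / thickness_m

def stretched_capacitance(c0, stretch):
    """Uniaxial stretch of an incompressible film: length x lam,
    width and thickness x lam**-0.5, so A/d and hence C scale
    linearly with the stretch lam."""
    return c0 * stretch
```

For instance, a 1 cm² film of 50 µm thickness with εᵣ = 3 gives roughly 53 pF; the monotone C(λ) relationship is what makes the capacitor directly usable as a strain sensor.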
Owing to their versatile functionality, intrinsic deformability, and planar design, dielectric elastomer sensors hold considerable potential for the creation of smart, sensitive surfaces. Far-reaching and individual adaptations to the respective application are possible by tuning geometric, mechanical, and electrical properties. Previous research, however, has been limited to the analysis and optimization of individual aspects without exploiting the potential of an overarching systemic perspective.
This work is therefore devoted to treating the sensor technology as an overall system, both horizontally, from abstract models to fabrication and prototypical application, and vertically across the components material, structure, and electronics.
Independent new findings and improvements were achieved in several subareas and subsequently integrated into the overarching treatment of the overall system. In the theoretical groundwork, new concepts for the spatially resolved acquisition of multiple physical quantities and for electrical and mechanical modeling were developed. The derived material requirements were translated into an in-depth characterization of the elastomer composite materials used, in which novel analytical methods in the form of dynamic electromechanical testing and nanoscale computed tomography were employed to elucidate the internal interactions.
In the area of automated processing, a new laser-based subtractive manufacturing process suitable for the complex multilayer electrode structures was established, which also builds a bridge to stretchable electronics.
In the final application evaluation, several spatially resolved and multimodal overall systems were built, and suitable measurement electronics and software were developed. Finally, the systems were characterized with a purpose-built robotic test system, and the potential of evaluation by means of machine learning was demonstrated.
Overview of the Organolead Trihalide Perovskite Crystal Area
The study of perovskite single crystals with high crystallographic quality is an important technological area of perovskite research, as it makes it possible to estimate their full optoelectronic potential and thus to boost their future applications [26]. It was therefore essential to grow high-quality single crystals with the lowest structural and chemical defect densities and with a stoichiometry relevant for their thin-film counterparts [26]. Optoelectronic devices, e.g. solar cells, are highly complex systems in which the properties of the active layer (absorber) are strongly influenced by the adjacent layers, so it is not always easy to define the targeted properties and elaborate the design rules for the active layer. Currently, organolead trihalide perovskite (OLTP) single crystals with the structure ABX3 are among the most studied crystalline systems. These hybrid crystals are solids composed of an organic cation such as methylammonium (A = MA+) or formamidinium (A = FA+), which forms a three-dimensional periodic lattice together with the lead cation (B = Pb2+) and a halogen anion such as chloride, bromide, or iodide (X = Cl-, Br-, or I-) [23]. Among them are methylammonium lead tribromide (MAPbBr3), methylammonium lead triiodide (MAPbI3), as well as methylammonium lead trichloride (MAPbCl3) [62, 63]. Important representatives with the larger cation FA+ are formamidinium lead tribromide (FAPbBr3) and formamidinium lead triiodide (FAPbI3) [23, 64]. Besides the exchange of cations and anions, it was possible to grow crystals containing two halogens, obtaining mixed crystals with different proportions of chlorine to bromine and bromine to iodine, as shown in Figure 70. By varying the mixing ratio of the halogens, it was therefore possible to vary the colour and thus the absorption properties of the crystals [85], as can be done with thin polycrystalline perovskite films.
In addition, for a few years it has also been possible to grow complex crystals that contain several cations as well as anions [26, 80, 81]. These include the double-cation, double-halide perovskite formamidinium lead triiodide – methylammonium lead tribromide (FAPbI3)0.9(MAPbBr3)0.1 (FAMA) [26, 80] and formamidinium lead triiodide – methylammonium lead tribromide – caesium lead tribromide (FAPbI3)0.9(MAPbBr3)0.05(CsPbBr3)0.05 (CsFAMA) [81], which have contributed significantly to increasing the power conversion efficiency (PCE) in thin-film photovoltaics [47, 79, 182]. The growth of such crystals is to this day performed exclusively from solution [23, 26, 56, 62]. Important preparation methods are cooling crystallisation from acid-based precursor solutions [22], inverse temperature crystallisation (ITC) [62], and antisolvent vapour-assisted crystallisation (AVC) [137]. In cooling crystallisation, the precursor salts AX and PbX2 are dissolved in an aqueous halogen-containing acid at high temperatures [56]. Controlled and slow cooling finally results in a supersaturated precursor solution, which leads to spontaneous nucleation of crystal nuclei, followed by subsequent crystal growth. The ITC method is based on the inverse or retrograde solubility of the dissolved perovskite in an organic solvent [23, 64]. With increasing temperature, the solubility of the perovskite decreases, and mm-sized crystals can be grown within a few hours [23]. In the AVC method, the precursors are likewise dissolved in an organic solvent [137]. Through the slow evaporation of a so-called antisolvent into the solution [137], the solubility of the perovskite in the resulting solvent mixture decreases and it finally precipitates. In addition, there are many other methods aimed at growing large, high-quality crystals in a short period of time [60, 61, 233, 310].
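Whether a given A/B/X combination can form the 3D ABX3 lattice described above is commonly estimated with the Goldschmidt tolerance factor, t = (r_A + r_X)/(√2·(r_B + r_X)). A sketch using commonly cited effective ionic radii (the radii, in particular those assigned to the organic cations MA+ and FA+, are assumptions from the general literature, not values from this work):

```python
from math import sqrt

# Effective ionic radii in pm; commonly cited literature values, and the
# effective radii of the organic cations MA+ and FA+ are assumptions.
RADII = {"MA": 217, "FA": 253, "Cs": 188, "Pb": 119,
         "Cl": 181, "Br": 196, "I": 220}

def tolerance_factor(a, b, x):
    """Goldschmidt tolerance factor t = (rA + rX) / (sqrt(2) * (rB + rX));
    values between roughly 0.8 and 1.0 favour the 3D perovskite lattice."""
    return (RADII[a] + RADII[x]) / (sqrt(2) * (RADII[b] + RADII[x]))
```

With these radii, MAPbI3 lands near t ≈ 0.91 and FAPbI3 closer to 1, which is one informal rationale for mixing cations such as MA+, FA+, and Cs+ in the FAMA and CsFAMA compositions.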
Since the first CubeSat launch in 2003, the hardware and software complexity of nanosatellites has been continuously increasing.
To keep up with the continuously increasing mission complexity and to retain the primary advantages of a CubeSat mission, a new approach for the overall space and ground software architecture and protocol configuration is elaborated in this work.
The aim of this thesis is to propose a uniform software and protocol architecture as a basis for software development, test, simulation and operation of multiple pico-/nanosatellites based on ultra-low power components.
In contrast to single-CubeSat missions, current and upcoming nanosatellite formation missions require faster and more straightforward development, pre-flight testing and calibration procedures as well as simultaneous operation of multiple satellites.
A dynamic and decentralized Compass mission network, consisting of uniformly accessible nodes, was established in multiple active CubeSat missions.
Compass middleware was elaborated to unify the communication and functional interfaces between all involved mission-related software and hardware components.
All systems can access each other via dynamic routes to perform service-based M2M communication.
With the proposed model-based communication approach, all states, abilities and functionalities of a system are accessed in a uniform way.
The Tiny scripting language was designed to allow dynamic code execution on ultra-low power components as a basis for constraint-based in-orbit scheduler and experiment execution.
The implemented Compass Operations front-end enables far-reaching monitoring and control capabilities of all ground and space systems.
Its integrated constraint-based operations task scheduler allows the recording of complex satellite operations, which are conducted automatically during the overpasses.
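A constraint-based pass scheduler of the kind described can be caricatured as a greedy placement of tasks into ground-station contact windows. This sketch is illustrative only and is not the Compass implementation; the data layout (task and pass tuples) is an assumption:

```python
def schedule(tasks, passes):
    """Greedy sketch: place each task (name, duration) into the first
    ground-station pass (start, end) that has enough remaining time.
    Times are in arbitrary units; illustrative only, not Compass code."""
    plan = []
    # per-pass cursor: next free time inside each contact window
    cursors = {i: start for i, (start, _) in enumerate(passes)}
    for name, duration in tasks:
        for i, (start, end) in enumerate(passes):
            if cursors[i] + duration <= end:
                plan.append((name, cursors[i]))  # schedule at cursor time
                cursors[i] += duration
                break                            # task placed; next task
    return plan
```

A real scheduler would also model inter-task constraints, power and link budgets, and priorities; the sketch only shows why operations recorded in advance can be executed automatically during the overpasses.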
The outcome of this thesis became an enabling technology for the UWE-3, UWE-4, and NetSat CubeSat missions.
After examining suitable parameters for a newly designed system, dynamic SIPGP could be developed. For the first time, SIPGP was performed while applying a constant flow of monomer solution through the reaction system. This added a new parameter: the flow rate (rfl). Accordingly, this parameter was examined by comparing dynamic to static SIPGP. It could be shown that applying a higher rfl to the system increases the contact angle, which indicates slower coating. The flow patterns inside the reactor were then modelled and calculated. These calculations indicated that, due to higher flow velocities, the contact angle on the coated samples would be lower at the sides of the sample and higher in the middle. This finding was verified by contact angle measurements. The influence of dynamic SIPGP on the temperature inside the reaction chamber was examined with temperature sensors inside the reactor. This showed that the constant flow of monomer solution can be utilized to reduce the warming of the reaction solution during the reaction. Finally, it was shown that dynamic SIPGP can decrease the formation of bulk polymer on the sample during the reaction. This enables SIPGP to fabricate more homogeneous coatings by applying a constant monomer flow.
Remote sensing time series is the collection or acquisition of remote sensing data in a
fixed equally spaced time period over a particular area or for the whole world. Near
daily high spatial resolution data is very much needed for remote sensing applications
such as agriculture monitoring, phenology change detection, environmental
monitoring, and so on. Remote sensing applications can produce better and more accurate results if they are provided with dense and accurate time series of data. The current remote sensing satellite architecture is still not capable of providing daily or near-daily high spatial resolution images to fulfill the needs of the above-mentioned remote sensing applications. Limitations in sensors, high development and operational costs of satellites, and the presence of clouds blocking the area of observation are some of the reasons that make daily or near-daily high spatial resolution optical remote sensing data highly challenging to achieve. With developments in optical sensor systems and well-planned remote sensing satellite constellations, this situation can be improved, but it comes at a cost. Even then the issue will not be completely resolved, and thus the growing need for high temporal and high spatial resolution data cannot be fulfilled entirely. Because the data collection process relies on satellites, which are physical systems, these can fail unpredictably for various reasons and cause a complete loss of observation for a given period of time, leaving a gap in the time series. Moreover, to observe long-term trends in phenology change due to rapidly changing environmental conditions, remote sensing data from the present alone is not sufficient; data from the past is also important. A better alternative solution for this issue is the generation of remote sensing time series by fusing data from multiple remote sensing satellites with different spatial and temporal resolutions. This approach is both effective and efficient. In this method, a high spatial, low temporal resolution image from a satellite such as Sentinel-2 is fused with a low spatial, high temporal resolution image from a satellite such as Sentinel-3 to generate synthetic high temporal, high spatial resolution data. Remote sensing time series generation by data fusion methods can be applied to satellite images captured currently as well as to images captured by satellites in the past. This provides the much-needed high temporal and high spatial resolution images for remote sensing applications. This approach, with its simple nature, is cost-effective and gives researchers the means to generate the data needed for their applications on their own from the limited sources of data available to them. An efficient data fusion approach in combination with a well-planned satellite constellation can offer a solution that ensures a near-daily time series of remote sensing data without any gaps. The aim of this research work is to develop efficient data fusion approaches to achieve dense remote sensing time series.
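The core fusion step described above can be sketched in a few lines. The following is a deliberately simplified, hypothetical illustration (practical STARFM-style methods add weighting over spectrally similar neighbouring pixels), not the approach developed in this thesis:

```python
import numpy as np

def naive_fuse(fine_t1, coarse_t1, coarse_t2):
    """Predict a fine-resolution image at time t2 from a fine image at t1
    and coarse images at t1 and t2 (all resampled to the same grid).
    Assumes the fine/coarse difference (sensor bias, sub-pixel texture) is
    stable between t1 and t2 -- the core assumption of this fusion idea."""
    return fine_t1 + (coarse_t2 - coarse_t1)

# toy 4x4 reflectance images
rng = np.random.default_rng(0)
fine_t1 = rng.uniform(0.1, 0.3, (4, 4))
coarse_t1 = fine_t1 + 0.02       # coarse sensor sees a biased version
coarse_t2 = coarse_t1 + 0.05     # uniform phenological change by t2
pred = naive_fuse(fine_t1, coarse_t1, coarse_t2)
assert np.allclose(pred, fine_t1 + 0.05)
```

A real implementation would additionally handle cloud masks, co-registration, and the bandpass differences between the two instruments.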
This thesis encompasses the further development of the additive manufacturing technology melt electrowriting (MEW) in order to improve its applicability to biomedical applications and scaffold design. Melt electrowriting is a process capable of producing highly resolved structures from microscale fibres. Nevertheless, several parameters influence the process, and it has not been clear how they affect the printing result. In this thesis, the influence of the processing and environmental parameters on the jet speed, fibre diameter and scaffold morphology is investigated, which has not been reported in the literature to date and significantly influences the printing quality. It was demonstrated that at higher ambient printing temperatures fibre definition can be compromised to the extent that individual fibres melt together completely, and that increased air humidity intensifies this effect. It was also shown how parameters such as applied voltage, collector distance, feed pressure and polymer temperature influence the fibre diameter and the critical translation speed. Based on these results, a detailed investigation of fibre diameter control and the printing of scaffolds with novel architectures was carried out. As an example, a 20-fold diameter ratio was obtained within one scaffold by changing the collector speed and the feed pressure during the printing process. Although the pressure change caused fibre diameter oscillations, fibres of different diameters were successfully integrated into two scaffold designs, which were tested for mesenchymal stromal cell suspension and adipose tissue spheroid seeding. Further design and manufacturing aspects are discussed, and jet attraction to previously printed structures is examined in connection with fibre positioning control in multilayer scaffolds. The artefacts that appear with increasing scaffold height in sinusoidal laydown patterns are counteracted by layer-by-layer path adjustment.
For the prediction of the printing error of the first deposited layer, an algorithm is developed that utilizes an empirical jet lag equation and the speed of fibre deposition. This model was able to predict the position of the printed fibre with an error up to ten times smaller than that of the programmed path. The same model makes it possible to qualitatively assess the fibre diameter change along a nonlinear pattern, as well as to indicate the areas of greatest pattern deformation with growing scaffold height. These results are used in later chapters for printing novel MEW structures for biomedical applications. In the final chapter, the concept of a multimodal scaffold was combined with suspended fibre printing to manufacture MEW scaffolds with controlled pore interconnectivity in three dimensions. These scaffolds proved to be a promising substrate for controlling the neurite spreading of chick DRG neurons.
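The empirical jet lag equation itself is not reproduced in this abstract. The following hypothetical sketch only illustrates the underlying idea that the deposited fibre trails the moving nozzle by a speed-dependent lag, so the printed path can be estimated from the programmed one; the function name, lag model and parameter values are all illustrative assumptions:

```python
# Hypothetical illustration of jet-lag-based position prediction.
# Assumption (not the thesis model): the lag shrinks linearly as the
# collector speed approaches the jet (fibre deposition) speed.

def predicted_position(nozzle_x, collector_speed, jet_speed, lag0=2.0):
    """Deposited fibre position for a straight path (mm).
    lag0: assumed maximal lag at standstill (mm), purely illustrative."""
    ratio = min(collector_speed / jet_speed, 1.0)
    lag = lag0 * (1.0 - ratio)   # lag vanishes at/above the jet speed
    return nozzle_x - lag

print(predicted_position(10.0, 500.0, 1000.0))   # collector at half jet speed -> 9.0
print(predicted_position(10.0, 1000.0, 1000.0))  # at jet speed, no lag -> 10.0
```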
The aim of this thesis was the preparation of a biomaterial ink for the fabrication of chemically crosslinked hydrogel scaffolds with low-micron-sized features using melt electrowriting (MEW). By developing a functional polymeric material based on 2-alkyl-2-oxazine (Ozi) and 2-alkyl-2-oxazoline (Ox) homo- and copolymers in combination with Diels-Alder (DA)-based dynamic covalent chemistry, it was possible to achieve this goal. This marks an important step for MEW, as soft and hydrophilic structures become available for the first time. The use of dynamic covalent chemistry is an elegant and efficient method for reconciling covalent crosslinking with melt processing. It was shown that the high chemical versatility of Ox and Ozi chemistry offers great potential for controlling the processing parameters. The established platform offers straightforward potential for modification with biological cues and fluorescent markers, which is essential for advanced biological applications. The physical properties of the material are readily controlled, and the potential for 4D printing was highlighted as well. The developed hydrogel architectures are excellent candidates for 3D cell culture applications. In particular, the low internal strength of some of the scaffolds, in combination with the tendency of such constructs to collapse into thin strings, could be interesting for the cultivation of muscle or nerve cells. In this context, it was also possible to show that MEW-printed hydrogel scaffolds can withstand aspiration and ejection through a cannula. This enables their application as scaffolds for the minimally invasive delivery of implants or functional tissue-equivalent structures to various locations in the human body.
Miniaturized satellites on the nanosatellite scale, below 10 kg of total mass, contribute the most to the number of satellites launched into Low Earth Orbit today. This results from the potential to design, integrate and launch these space missions within months at very low cost. In the past decade, reliability in the fields of system design, communication, and attitude control has matured to allow for competitive applications in Earth observation, communication services, and science missions. The capability of orbit control is an important next step in this development, enabling operators to adjust orbits according to current mission needs and enabling small satellite formation flight, which promotes new measurements in various fields of space science. Moreover, this ability allows missions with altitudes above the ISS to comply with planned regulations regarding collision avoidance maneuvering.
This dissertation presents the successful implementation of orbit control capabilities in the pico-satellite class for the first time. This pioneering achievement is demonstrated on the 1U CubeSat UWE–4. A focus is on the integration and operation of an electric propulsion system on miniaturized satellites. Besides the limitations in size, mass, and power of a pico-satellite, the choice of a suitable electric propulsion system was driven by electromagnetic cleanliness and its use as a combined attitude and orbit control system. Moreover, the integration of the propulsion system leaves the valuable space at the outer faces of the CubeSat structure unoccupied for future use by payloads. The NanoFEEP propulsion system used consists of four thruster heads, two neutralizers and two Power Processing Units (PPUs).
The thrusters can be used continuously for 50 minutes per orbit after liquefaction of the propellant by dedicated heaters. The power consumption of a PPU with one activated thruster, its heater and a neutralizer at emitter current levels of 30-60 μA, corresponding to thrust levels of 2.6-5.5 μN, is in the range of 430-1050 mW. Two thruster heads were activated within the scope of in-orbit experiments. The thrust direction was determined using a novel algorithm to within 15.7° and 13.2° of the mounting direction. Despite limited controllability of the remaining thrusters, thrust vector pointing was achieved using the magnetic actuators of the Attitude and Orbit Control System.
In mid-2020, several orbit control maneuvers changed the altitude of UWE–4, a first for pico-satellites. During the orbit-lowering scenario with a duration of ten days, a single thruster head was activated in 78 orbits for 5:40 minutes per orbit. This reduced the orbit altitude by about 98.3 m and applied a Δv of 5.4 cm/s to UWE–4. The same thruster was activated in another experiment during 44 orbits within five days for an average duration of 7:00 minutes per orbit. The altitude of UWE–4 was increased by about 81.2 m and a Δv of 4.4 cm/s was applied. Additionally, a collision avoidance maneuver was executed in July 2020, which increased the distance of closest approach to the object by more than 5000 m.
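The reported altitude changes are consistent with the first-order relation between a small tangential Δv and the change in semi-major axis of a near-circular orbit, Δa ≈ 2aΔv/v. A quick cross-check (the ~580 km altitude assumed for UWE–4 here is an illustrative value):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def altitude_change(alt_m, dv_ms):
    """First-order change in semi-major axis of a near-circular orbit
    for a small tangential delta-v: da = 2 * a * dv / v."""
    a = R_EARTH + alt_m
    v = math.sqrt(MU / a)   # circular orbital speed
    return 2.0 * a * dv_ms / v

print(round(altitude_change(580e3, 0.054), 1))  # about 99 m (reported: 98.3 m)
print(round(altitude_change(580e3, 0.044), 1))  # about 81 m (reported: 81.2 m)
```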
Nanoelectronics is an essential technology for down-scaling beyond the limits of silicon-based electronics. Single-wall carbon nanotubes (SWNT) are semiconducting components that exhibit a large variety of properties making them usable for sensing, telecommunication, or computational tasks. Due to their high surface-to-volume ratio, carbon nanotubes are strongly affected by molecular adsorption, and almost all of their properties depend on surface adsorption. SWNT with smaller diameters (0.7-0.9 nm) show a stronger sensitivity to surface effects. An optimized synthesis route was developed to produce these nanotubes directly. They were produced with a clean surface, high quality, and large lengths of 2 μm. The results complement previous studies on larger diameters (0.9-1.4 nm). They allow statistically significant conclusions for a perfect nanotube, selected from a subset of nanotubes with good emission intensity and high mechanical durability. The adsorption of molecules on the surface of carbon nanotubes influences the motion and binding strength of charge-separated states in this system. To gain insight into the adsorption processes on the surface with a minimum of concurrent overlapping effects, a microscopic setup and a measurement technique were developed. The system was estimated to exhibit excellent properties such as long exciton diffusion lengths (>350 nm) and large exciton sizes (8.5(5) nm), which was substantiated by a simulation. We studied the adsorption processes at the surface of single-wall carbon nanotubes for molecules in the gas phase, solvent molecules, and surfactant molecules. The experiments were all carried out on suspended, individualized carbon nanotubes on a silicon wafer substrate. The experiments in the gas phase showed that the excitonic emission energy and intensity experience a rapid blue shift during observation.
This shift was associated with the spontaneous desorption of large clusters of gaseous molecules caused by laser heat-up. The measurement of this desorption was essential for creating a reference to an initially clean surface and allows a comparison with previous measurements on this topic. Furthermore, the adsorption of hydrogen on the nanotube surface at high temperatures was investigated. It was found that a new emission mode arises slightly red-shifted with respect to the excitonic emission in these systems. The new signal is almost as strong as the main excitonic peak and was associated with the brightening of dark excitons at sp3 defects through a K-phonon-assisted pathway. This finding is useful for the direct synthesis of spintronic devices, as these systems are known to act as single-photon emitters. The suspended nanotubes were further studied to estimate the effect of solvent adsorption on the excitonic states during nanotube dispersion for each nanotube individually. A significant quantum yield loss is observable for hexane and acetonitrile, while the emission intensity was found to be strongest in toluene. The reference to a clean surface allowed us to estimate the exact influence of the dielectric environment of adsorbing solvents on the excitonic emission energy. Solvent adsorption was found to lead to an energy shift almost twice as high as suggested in previous studies. The magnitude of this energy shift, however, was comparable for all solvents, which suggests that the distinct dielectric constant of the outer environment influences the energy shift less significantly than previously thought. An interesting phenomenon was found when using acetonitrile as a solvent, which leads to greatly enhanced emission properties. The emission is more than twice as high as in the same air-suspended nanotubes, which suggests a process that depends on the laser intensity.
In this study, it was plausibly explained how an energy down-conversion is possible through the coupling of the excitonic states with solvent vibrations. The strength of this coupling, however, also suggests adsorption to the inside of the tubular nanotube structure, leading to a coupled vibration of linear acetonitrile molecules adsorbed to the inner surface. These findings are important for the field of nanofluidics and provide an excellent system for efficient energy down-conversion in the transmission window of biological tissue. Having separated the pure effect of solvent adsorption allowed us to study the undisturbed molecular adsorption of polymers in these systems. The addition of polyfluorene polymer leads to a slow but stepwise intensity increase. This intensity increase overlaps with a concurrent process that leads to an intensity decrease. Unfortunately, observation of the stepwise process has a low spatial resolution of only 100-250 nm, which is in the range of the exciton diffusion length in these systems and hinders a detailed analysis. The two competing and overlapping processes are considered to originate from slow π-stacking and fast side-chain binding. Insights into this process are essential for selecting suitably formed polymers. However, the findings also emphasize the importance of solvent selection during nanotube dispersion, since solvent effects were proven to be far more critical for the quantum yield in these systems. These measurements can shed light on the ongoing debate on polymer adsorption during nanotube individualization and allow us to direct the discussion more toward the selection of suitable solvents. This work provides fundamental insights into the adsorption of various molecules on the surface of individually observed suspended single-wall carbon nanotubes. It allows observing the adsorption of individual molecules below the optical limit in the solid, liquid, and gas phases.
Nanotubes can act as sensing material for detecting changes in their immediate surroundings. These fundamental findings are also crucial for increasing the quantum yield of solvent-dispersed nanotubes. They can provide better light-harvesting systems for microscopy in biological tissue and lay the foundation for a more efficient telecommunication infrastructure with nano-scale spintronic devices and lasing components. The newly discovered solvent alignment in the nanotube surrounding can potentially also be used for supercapacitors needed for caching calculation results in computational devices that use polymer-wrapped nanotubes as transistors. Although fundamental, these studies develop a strategy to illuminate this room at the bottom of the nano-scale that has so far been barely visible.
In this work, the electronic properties of graphene on metal surfaces were investigated by means of scanning tunneling microscopy and quasiparticle interference (QPI) measurements. The use of heavy substrates was intended to enhance the spin-orbit interaction of the graphene so that a band gap at the K point of the band structure could be observed via QPI. To test QPI measurements on graphene, graphene was produced by heating the surface of a SiC(0001) crystal and examined with the scanning tunneling microscope. This system has already been described extensively in the literature, and I was able to successfully reproduce, in good quality on gr/SiC(0001), the known QPI measurements of scattering rings based on the Dirac cones of graphene at the K point. Subsequently, graphene was grown by a well-established procedure, depositing ethylene onto a heated Ir(111) substrate. This gr/Ir(111) system also served as the basis for experiments intercalating bismuth (gr/Bi/Ir(111)) and gadolinium (gr/Gd/Ir(111)) between the graphene and the substrate. On gr/Bi/Ir(111), a network of dislocation lines already known from the literature was observed, for which a temperature dependence could additionally be demonstrated. In the attempt to intercalate gadolinium, two different surface structures were observed, which might be attributed to different arrangements or amounts of the intercalated gadolinium. However, scattering rings could not be observed via QPI on any of these three systems. In preparation for the intercalation of gadolinium, its growth and magnetic properties on a W(110) crystal were investigated. A temperature-dependent exchange splitting known from the literature could be reproduced. In addition, six different magnetic domains were observed.
Furthermore, magnetic stripes can be discerned on the surface that may be based on a spin spiral. As a basis for the possible future creation of graphene-like molecular lattices, the growth of H-TBTQ and Me-TBTQ on Ag(111) was investigated. The molecules align with the surface structure of the silver and form elongated islands whose edges run along three preferred directions. On H-TBTQ, a second, windmill-like arrangement of the molecules on the surface was additionally observed. At the molecule-covered areas of the surface, a shift of the Ag surface state was observed, which might be explained by a charge transfer from the Ag(111) substrate to the TBTQ molecules.
In the present work, the structural and magnetic properties of various 3d transition-metal oxide chains (TMO chains) on Ir(001) and Pt(001) are investigated. These exhibit a (3 × 1) structure with periodically arranged chains that are coupled to the substrate only via the oxygen bond. While the structure has been confirmed by experimental and theoretical studies, only calculations exist for the magnetic properties. To test these theoretical predictions, spin-polarized scanning tunneling microscopy (SP-STM) is used, which allows the magnetic order to be imaged with atomic resolution.
The investigations begin with a presentation of the Ir(001) surface, which exhibits a (5 × 1) reconstruction. This reconstruction can be lifted by heating the Ir substrate in an oxygen atmosphere, forming a (2 × 1) oxygen reconstruction. The quality of the surface depends on the growth temperature T and the oxygen pressure pOx used. The oxygen reconstruction prepared at T = 550 °C and pOx = 1 × 10^−8 mbar serves as the starting point for the subsequent preparations of CoO2, FeO2 and MnO2 chains. For this purpose, one third of a monolayer (ML) of the transition metal is evaporated onto the substrate surface and the sample is heated once more in an oxygen atmosphere. In this way, the (3 × 1) structure of the known chains can be confirmed and the family of TMO chains extended by CrO2 chains.
Predictions regarding the magnetic structure of the TMO chains have been published in the relevant literature, according to which the coupling along and between CoO2 chains is ferromagnetic (FM), while for FeO2 and MnO2 chains it is antiferromagnetic (AFM). While testing these predictions with SP-STM yields no evidence of magnetic structures for CoO2 and CrO2 chains, different magnetic phases are present for FeO2 and MnO2 chains. Indeed, the AFM coupling along both chain types can be confirmed with the experimentally found unit cells. In contrast, the couplings between the chains contradict the calculations. FeO2 chains show a stable FM order, which leads to a magnetic (3 × 2) unit cell with an easy magnetization axis along the surface normal (out-of-plane). The MnO2 chains likewise deviate from the calculated collinear magnetic order between neighboring chains and exhibit a chiral structure. The rotation of the Mn spins by 120° in the sample plane (in-plane) produces a magnetic (9 × 2) unit cell, whose period is confirmed by new DFT calculations. According to these calculations, it is a spin spiral stabilized by the Dzyaloshinskii-Moriya (DM) interaction with an energy gain of 0.3 meV per Mn atom relative to the collinear FM state. As for previously published clusters and adatoms on Pt(111), it is mediated by the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, which explains the experimentally observed uniform sense of rotation of the spirals.
The RKKY interaction depends strongly on the Fermi surface of the substrate. In the following chapter, the structural and magnetic properties of TMO chains are therefore analyzed on a further substrate, Pt(001); at the time of this work, only the existence of CoO2 chains on Pt(001) was known from the literature. Comparable to Ir(001), Pt(001) also has a reconstructed surface, which, however, is stable against oxidation. The third of a ML of the transition metal must therefore be evaporated directly onto the reconstruction. The growth of the transition metal depends on the substrate temperature and influences the result of the subsequent oxidation. As for the growth of the chains on Ir(001), the oxidation is carried out by heating the sample in an oxygen atmosphere, and it results in chains with a period of 3aPt only when the transition metal is deposited onto cold Pt(001) surfaces. In this way, not only can the (3 × 1) structure of the CoO2 chains be confirmed, but the family of TMO chains can also be extended, via atomic resolution, by MnO2 chains on Pt(001). In contrast, the non-magnetic measurements in the case of Fe are inconclusive. Although chains spaced at three times the Pt lattice vector are present here as well, the (3 × 1) structure cannot be verified. This is due to a corrugation with a period of 2aPt along the chains, which may be an indication of a Peierls instability.
Following the procedure for Ir(001), SP-STM measurements are carried out for the TMO chains on Pt(001) and the prediction of an AFM coupling for CoO2 chains is tested. Here too, as in the case of the CoO2 chains on Ir(001) and in contradiction to the prediction, no magnetic structures can be found for either polarization direction of the tip. Furthermore, the MnO2 chains on Pt(001), with their chiral magnetic structure, behave similarly to those on Ir(001). This confirms the assumption of an indirect DM interaction, whereby the 72° rotation of the Mn spins results in a longer period of the cycloidal spin spiral. The explanation lies in the dependence of the RKKY interaction on the Fermi wave vector of the substrate, while the DM interaction changes only slightly in the transition from Ir to Pt.
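The reported magnetic unit cells follow from simple geometry: with chains spaced three substrate lattice constants apart, a spin rotation of φ between adjacent chains closes after 360°/φ chains. A minimal arithmetic check (illustrative only; the interpretation of the Pt(001) value as 15 lattice constants is an inference from the stated 72° rotation, not a number given in the abstract):

```python
def magnetic_period(rotation_deg, chain_spacing=3):
    """Magnetic unit-cell length (in substrate lattice constants)
    perpendicular to the chains, for a spin spiral rotating by
    rotation_deg between adjacent chains spaced chain_spacing apart."""
    chains_per_period = 360 / rotation_deg
    return chains_per_period * chain_spacing

print(magnetic_period(120))  # 9.0 -> matches the (9 x 2) cell on Ir(001)
print(magnetic_period(72))   # 15.0 -> the longer period found on Pt(001)
```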
This thesis investigates strong light-matter coupling with excitons in 2D materials. The work starts with an introduction to the fundamentals of excitons in 2D materials, microcavities and strong coupling in chapter 2. The experimental methods used in this work are explained in detail in chapter 3. Chapter 4 covers basic investigations that help to select appropriate materials and cavities for the subsequent experiments. In chapter 5, results on the formation of exciton-polaritons in various materials and cavity designs are presented. Chapter 6 covers studies on the spin-valley properties of exciton-polaritons, including effects such as valley polarization, valley coherence and valley-dependent polariton propagation. Finally, the formation of hybrid polaritons and their condensation are presented in chapter 7.
For dosimetry in radiotherapy, a range of detectors of different designs and operating principles is available. Detector properties such as the size of the active volume, energy-dependent response and field perturbations caused by components influence their signal, so that no ideal, universally applicable detector exists. Especially under measurement conditions in which the particle fluence changes strongly at the point of measurement, e.g. in small fields, the detector signals can deviate strongly from the true dose conditions. In this work, the response of various detector types in such extreme situations was analyzed. Diodes and ionization chambers of various designs and sizes were compared against Gafchromic EBT3 film in several experiments.
The response to scattered radiation could be investigated by blocking the center of the field, with the volume effect additionally corrected geometrically. This partly revealed a strong over-response. Furthermore, it was shown that the deviations occurring when measuring lateral profiles, that is, in the field center, in regions of steep dose gradients and outside the useful field, can be compensated by using a detector combination. This also improves the agreement with the profiles measured on film.
For ionization chambers, effective points of measurement were determined; the necessary shifts were in some cases considerably smaller than recommended in the current dosimetry protocols. Especially for small-volume ionization chambers with low signal strengths, the use of electrometers positioned in the irradiation room led to interference from scattered radiation. These effects could be reduced by lowering the amount of scattered radiation reaching the electrometer.
Subsequently, the response in the build-up region was compared. Here, differences appeared in particular between detector types, but also between the polarities of the applied chamber voltage. Using a lead foil, the influence of electron contamination was filtered out. In addition, the response of various detectors in the near-surface region was investigated with applied magnetic fields of up to 1.1 T.
In all cases, limits of detector applicability were identified. The findings make it possible to choose suitable detectors for the various extreme situations and to estimate the residual deviations. It was also shown where a detector combination can improve accuracy.
The aim of this work is to bring quantitative MRI into focus. In recent years, much progress has been made in this field of research, and a wide variety of sequences and methods have been presented to measure, in particular, relaxation-time parameters quantitatively in a short time. Steady-state sequences are particularly suited to this task, as they require short measurement times and additionally offer a relatively high SNR. The IR TrueFISP sequence in particular offers great potential for parameter quantification. This sequence was originally introduced at the University of Würzburg for the simultaneous measurement of T1 and T2 relaxation times and was further developed with regard to time efficiency. In this work, a novel iterative reconstruction approach for the IR TrueFISP sequence was developed that is based on a principal component analysis (PCA) and exploits the smooth signal evolutions. Owing to the high temporal resolution of this reconstruction technique, tissue components with short relaxation times also become detectable. Furthermore, the reconstruction approach preserves information from several tissue components within a voxel and thus enables a relaxographic analysis. In humans in particular, the partial volume effect and the microstructure of the tissue lead to signal evolutions that yield a multi-exponential signal. MR relaxography, i.e. the representation of relaxation-time distributions within a voxel, is one way to extract the contributing tissue components from the superimposed signal evolution. Overall, optimized relaxometry with the possibility of analytical correction of magnetic field inhomogeneities, together with accelerated relaxography, forms the main part of this dissertation. The main chapters are summarized separately below.
Simultaneous acquisition of quantitative T1 and T2 parameter maps can be achieved with a golden-angle-based radial IR TrueFISP readout in approximately 7 seconds per slice. The previous reconstruction technique using the KWIC filter is limited by its broad filter bandwidth and thus in its temporal resolution. Especially at high spatial frequencies, a very large number of projections is combined to generate one image. As a result, tissue components with a short T1* relaxation time (e.g. fat or myelin) cannot be resolved accurately. To circumvent this problem, the T1* shuffling reconstruction was developed, which is based on the T2 shuffling approach. This reconstruction technique exploits the smooth signal evolutions of the IR TrueFISP sequence and enables the application of a PCA. The iterative reconstruction achieves a markedly improved temporal resolution with only eight combined projections per generated image. A drawback, however, is the stronger noise in the first images of the time series caused by the applied PCA. This increased noise manifests itself in slightly elevated standard deviations in the computed parameter maps. However, the mean values are closer to the reference values compared with the results obtained with the KWIC filter. Ultimately, the results are slightly noisier but more accurate. Using additional regularization techniques or prior knowledge of the noise level, it would furthermore be possible to improve the SNR of the first images and thereby reduce the described effect.
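The subspace idea behind this kind of PCA-based reconstruction can be illustrated in a few lines: smooth relaxation curves span a low-dimensional principal-component space, so projecting a noisy signal evolution onto that space suppresses noise while preserving the curve. This is a generic sketch with illustrative parameters, not the T1* shuffling implementation of this work:

```python
import numpy as np

t = np.linspace(0.01, 5.0, 200)        # time points (s), illustrative
T1s = np.linspace(0.2, 3.0, 100)       # dictionary of T1* values
# IR-type signal evolutions recovering from -1 toward +1 (schematic)
dictionary = np.stack([1.0 - 2.0 * np.exp(-t / T1) for T1 in T1s])

# principal components of the dictionary (PCA via SVD)
_, _, Vt = np.linalg.svd(dictionary, full_matrices=False)
basis = Vt[:4]                         # first 4 components span the curves

rng = np.random.default_rng(1)
clean = 1.0 - 2.0 * np.exp(-t / 0.8)   # a curve from the same family
noisy = clean + 0.05 * rng.standard_normal(t.size)
denoised = basis.T @ (basis @ noisy)   # project onto the subspace

# the projection removes most of the noise energy
assert np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean)
```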
In principle, the accuracy of IR TrueFISP depends on the T1/T2 ratio of the tissue in question and on the chosen flip angle. In this work, the flip angle was optimized specifically for white and gray matter in the human brain. At 35°, it was also chosen somewhat smaller in order to additionally minimize magnetization transfer effects. With these settings, the precision is very good, especially for high T1 and low T2 values, but deteriorates in particular for higher T2 values. This, however, is a general problem of the IR TrueFISP sequence and is not related to the developed reconstruction method. In addition, the fifth chapter presented an acquisition technique that achieves 3D coverage of the quantitative measurements of the brain in a clinically acceptable time of under 10 minutes. This is accomplished by means of parallel imaging, using a combination of radial sampling in-plane and Cartesian acquisition in the slice direction (stack-of-stars).
A major problem in steady-state sequences (and thus also in IR TrueFISP) are magnetic field inhomogeneities, caused by susceptibility differences between tissues and/or inhomogeneities of the main magnetic field. These lead to signal cancellations and, associated with them, to the described banding artifacts. With the analytically derived correction formulas, it is now possible to correct the computed (T1, T2) value pairs over a wide range, taking into account the actually occurring off-resonance frequency. At the critical locations where the bandings occur, however, even this correction does not yield usable results. For the accuracy of the results, it is generally advisable to additionally acquire the flip-angle and B0 maps in order to know these parameters exactly for the quantitative evaluation. With the methods described in chapter 6, it could in principle also be possible to determine the off-resonance frequency from the signal evolution and to dispense with the additional measurement of the B0 map. B0 changes during the measurement, caused by heating of the passive shim elements in the MR system, can hardly be corrected. A stable scanner without B0 drift is therefore required for quantitative evaluations.
The mentioned measurement time of 7 seconds per slice guarantees that even tissues with longer relaxation components are approximately in the steady state, which in turn is necessary for inverting the signal into the decaying course toward zero and for the subsequent multicomponent analysis (cf. Chapter 7). With the inverse Laplace transform, signal courses within a voxel can be examined for multiple components. The originally assumed mono-exponential course is replaced by multi-exponential behavior, which corresponds more closely to reality, especially in biological tissue. Tissues with short relaxation components (T1* < 200 ms) are clinically relevant and detectable with T1* shuffling. Myelin within the brain, in particular, is an indicator for early-stage diagnosis in neurological questions (e.g. neurodegenerative diseases) and therefore of special interest. Integration over different T1* ranges in the T1* spectrum furthermore enables the creation of tissue-component maps, on the basis of which clinical evaluations would be meaningful. The creation of these maps is possible in principle and works quite well for medium and long tissue components. The clinically relevant short tissue components, however, are not yet satisfactory with single-shot radial acquisition. Therefore, the acquisition technique was further developed into a quasi-random Cartesian acquisition with multiple shots. The results were presented in Chapter 7 and are promising. Only the measurement time should be further shortened with additional acceleration and extended to a Cartesian 3D acquisition.
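The multicomponent analysis described above rests on inverting a Laplace-type relation: the voxel signal is a superposition of exponential decays, and the T1* spectrum is the set of non-negative amplitudes that best explains it. A minimal sketch of this idea, using a non-negative least-squares fit on a logarithmic T1* grid (illustrative values, not the thesis implementation):

```python
import numpy as np
from scipy.optimize import nnls

# Logarithmic grid of candidate T1* values (ms) and sampling times (ms).
t1s_grid = np.logspace(1, 4, 61)          # 10 ms ... 10 s
t = np.arange(0, 3000, 7)                 # readout time points

# Dictionary matrix: each column is a mono-exponential decay.
A = np.exp(-t[:, None] / t1s_grid[None, :])

# Synthetic two-component voxel signal (short + long component).
y = 0.7 * np.exp(-t / t1s_grid[20]) + 0.3 * np.exp(-t / t1s_grid[40])

# Non-negative least squares yields a sparse T1* "spectrum".
spectrum, residual = nnls(A, y)
peaks = t1s_grid[spectrum > 0.05]
print(peaks)   # clusters around the two true T1* values (~100 ms, ~1000 ms)
```

In practice the measured signal is noisy and the inversion is ill-posed, so regularization is typically added; the sketch only shows the structure of the dictionary approach.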
The restriction to T1* spectra in the multicomponent analysis, and the fact that their amplitudes depend on a combination of S0 and Sstst, mean that the T1 and T2 information for a single tissue type is not readily accessible. Chapter 8 showed that this can be achieved with one additional
measurement. The final result of these measurements without and with inversion are two-dimensional spectra, from which the T1 and T2 values can be read off for each tissue component within a voxel. Importantly, the approach used requires no prior knowledge of the number of tissue components (peaks) to be expected in the voxel. In this method, too, knowledge of the actual flip angle matters, since it enters the formulas for calculating T1 and T2. The stability of the B0 field is likewise of enormous importance here, since changes between the two measurements lead to different steady states and thus to deviations in the subsequent calculations, which assume the same steady-state value.
In summary, this work lays the foundations for more accurate and more robust quantitative measurements using steady-state sequences. It was shown that relaxation-time spectra can be generated for every single voxel. This enables an improved evaluation and more precise statements about the composition of a sample (especially of human tissue). In addition, the theory for ultrafast 2D relaxography measurements was presented. First proof-of-principle experiments show that it is possible to measure 2D relaxation-time spectra in a very short time and to display them graphically. This acquisition and data-processing technique is unique in this form, and to date no faster method can be found in the literature.
Motivated by the great potential offered by the combination of additive manufacturing and tissue engineering, a novel polymeric bioink platform based on poly(2-oxazoline)s was developed that might help to further advance the young and upcoming field of biofabrication. In the present thesis, the synthesis as well as the characteristics of several diblock copolymers consisting of POx and POzi have been investigated, with a special focus on their suitability as bioinks.
In general, the copolymerization of 2-oxazolines and 2-oxazines bearing different alkyl side chains was demonstrated to yield polymers in good agreement with the targeted degree of polymerization and with moderate to low dispersities.
For every diblock copolymer synthesized during the present study, a more or less pronounced temperature dependence of the dynamic viscosity was demonstrated. Diblock copolymers comprising a hydrophilic PMeOx block and a thermoresponsive PnPrOzi block showed temperature-induced gelation above a degree of polymerization of 50 at a polymer concentration of 20 wt%. Such behavior had never been described before for copolymers consisting solely of poly(cyclic imino ether)s.
Physically cross-linked hydrogels based on POx-b-POzi copolymers exhibit reverse thermal gelation properties, as described for solutions of PNiPAAm and Pluronic F127. However, by applying SANS, DLS, and SLS it could be demonstrated that the underlying gel formation mechanism is different for POx-b-POzi based hydrogels. It appears that polymersomes with low polydispersity are already formed at very low polymer concentrations of 6 mg/L. Increasing the polymer concentration resulted in the formation of a bicontinuous sponge-like structure, which might arise from the merger of several vesicles. For longer polymer chains, a phase transition into a gyroid structure was postulated, which corresponds well with the observed rheological data.
Stable hydrogels with an unusually high mechanical strength (G' ~ 4 kPa) formed above TGel, which could be adjusted over a range of 20 °C by changing the degree of polymerization while maintaining the symmetric polymer architecture. Variations of the chain ends revealed only a minor influence on TGel, whereas the influence of the solvent should not be neglected, as shown by a comparison of cell culture medium and MilliQ water.
Rotational as well as oscillatory rheological measurements revealed a high suitability for printing, as POx-b-POzi based hydrogels exhibit strong shear-thinning behavior in combination with outstanding recovery properties after high shear stress.
Cell viability assays (WST-1) of PMeOx-b-PnPrOzi copolymers with NIH 3T3 fibroblasts and HaCaT cells indicated that the polymers were well tolerated, as no dose-dependent cytotoxicity was observed after 24 h at non-gelling concentrations up to 100 g/L.
In summary, copolymers consisting of POx and POzi significantly increased the accessible range of properties of POx-based materials. In particular, thermogelation of aqueous solutions of diblock copolymers comprising PMeOx and PnPrOzi had never been described before for any copolymer consisting solely of POx or POzi. In combination with other characteristics, e.g. very good cytocompatibility at high polymer concentrations and comparably high mechanical strength, the formed hydrogels could be successfully used for 3D bioprinting. Although the results appear promising and the developed hydrogel is a serious bioink candidate, competition is tough and it remains an open question which system or systems will prevail in the future.
The attitude and orbit control system of pico- and nano-satellites is to date one of the bottlenecks for future scientific and commercial applications. A performance increase, while complying with the satellites' restrictions, will enable new space missions, especially for the smallest of the CubeSat classes. This work addresses methods to measure and improve a satellite's attitude pointing and orbit control performance based on advanced sensor data analysis and optimized on-board software concepts. These methods are applied to satellites in orbit and to future CubeSat missions to demonstrate their validity. An in-orbit calibration procedure for a typical CubeSat attitude sensor suite is developed and applied to the UWE-3 satellite in space. Subsequently, a method is developed to estimate the attitude determination accuracy without the help of an external reference sensor. Using this method, it is shown that the UWE-3 satellite achieves an in-orbit attitude determination accuracy of about 2°.
An advanced data analysis of the attitude motion of a miniature satellite is used to estimate the main attitude disturbance torque in orbit. It is shown that the magnetic disturbance is by far the most significant contribution for miniature satellites, and a method to estimate the residual magnetic dipole moment of a satellite is developed. Its application to three CubeSats currently in orbit reveals that magnetic disturbances are a common issue for this class of satellites. The dipole moments measured are between 23.1 mAm² and 137.2 mAm². In order to autonomously estimate and counteract this disturbance in future missions, an on-board magnetic dipole estimation algorithm is developed.
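The estimation exploits that a residual dipole moment m in the geomagnetic field B produces the torque tau = m x B, which is linear in m. Stacking this relation for torque/field samples collected along the orbit gives an overdetermined linear system; a hypothetical least-squares sketch on synthetic data (not the flight algorithm):

```python
import numpy as np

def skew(v):
    """Matrix form of the cross product: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(1)
m_true = np.array([0.05, -0.02, 0.11])       # residual dipole (A m^2)

# Field samples (T) varying along the orbit, and the resulting torques.
B = rng.normal(scale=4e-5, size=(50, 3))
tau = np.cross(m_true, B)                    # tau_i = m x B_i

# tau_i = -skew(B_i) @ m  ->  stack into one tall linear system.
A = np.vstack([-skew(b) for b in B])
m_est, *_ = np.linalg.lstsq(A, tau.ravel(), rcond=None)
print(m_est)   # close to m_true, provided B changes direction over the orbit
```

The component of m parallel to a single field vector is unobservable; the system only becomes well conditioned because B rotates along the orbit.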
The autonomous neutralization of such disturbance torques, together with the simplification of attitude control for the satellite operator, is the focus of a novel on-board attitude control software architecture. It incorporates disturbance torques acting on the satellite and automatically optimizes the control output. Its application is demonstrated in space on board the UWE-3 satellite through various attitude control experiments, the results of which are presented here.
The integration of a miniaturized electric propulsion system will enable CubeSats to perform orbit control and thus open up new application scenarios. The in-orbit characterization, however, poses the problem of precisely measuring very low thrust levels on the order of µN. A method to measure this thrust based on the attitude dynamics of the satellite is developed and evaluated in simulation. It is shown that the demonstrator mission UWE-4 will be able to measure thrust with a high accuracy of 1% for thrust levels higher than 1 µN.
The orbit control capabilities of UWE-4 using its electric propulsion system are evaluated, and a hybrid attitude control system making use of the satellite's magnetorquers and the electric propulsion system is developed. Based on the flexible attitude control architecture mentioned before, thrust vector pointing accuracies of better than 2° can be achieved. This results in a thrust delivery of more than 99% of the desired acceleration in the target direction.
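The quoted numbers are consistent: with a pointing error theta, the acceleration component delivered along the target direction scales with cos(theta), so a 2° error still delivers more than 99.9%. A quick check:

```python
import math

def delivered_fraction(pointing_error_deg):
    """Fraction of thrust delivered along the target direction."""
    return math.cos(math.radians(pointing_error_deg))

print(delivered_fraction(2.0))   # ~0.9994, comfortably above 0.99
```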
It is the aim of this thesis to present a visual body weight estimation system that is suitable for medical applications. A typical scenario where the estimation of the body weight is essential is the emergency treatment of stroke patients: in case of an ischemic stroke, the patient has to receive a body-weight-adapted drug to dissolve a blood clot in a vessel. The accuracy of the estimated weight directly influences the outcome of the therapy. However, the treatment has to start as early as possible after arrival in the trauma room to be effective. Weighing a patient takes time, and the patient has to be moved. Furthermore, patients are often unable to communicate their body weight due to their stroke symptoms. Therefore, it is state of the art that physicians guess the body weight. A patient receiving too low a dose has an increased risk that the blood clot does not dissolve and brain tissue is permanently damaged. Today, about one-third of patients receive an insufficient dose. In contrast, an overdose can cause bleeding and further complications. Physicians are aware of this issue, but a reliable alternative is missing.
The thesis presents state-of-the-art principles and devices for the measurement and estimation of body weight in the context of medical applications. While scales are common and available at a hospital, the process of weighing takes too long and can hardly be integrated into the stroke treatment workflow. Sensor systems and algorithms are presented in the related-work section and provide an overview of different approaches.
The system presented here -- called Libra3D -- consists of a computer installed in a real trauma room, as well as visual sensors integrated into the ceiling. For the estimation of the body weight, the patient lies on a stretcher placed in the field of view of the sensors. The three sensors -- two RGB-D and a thermal camera -- are calibrated intrinsically and extrinsically. Also, algorithms for sensor fusion are presented to align the data from all sensors, which is the basis for a reliable segmentation of the patient.
A combination of state-of-the-art image and point cloud algorithms is used to localize the patient on the stretcher. The main challenge in this scenario is the dynamic environment, including other people or medical devices in the field of view.
After the successful segmentation, a set of hand-crafted features is extracted from the patient's point cloud. These features rely on geometric and statistical values and provide a robust input to a subsequent machine learning approach. The final estimation is done with a previously trained artificial neural network.
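As a toy illustration of this pipeline step (segmented point cloud, hand-crafted geometric/statistical features, learned regressor), the following sketch computes a few plausible features and fits a linear model as a stand-in for the trained neural network; the feature choices and data are invented, not those of Libra3D:

```python
import numpy as np

def features(cloud):
    """Hand-crafted geometric/statistical features of a patient point cloud."""
    extent = cloud.max(axis=0) - cloud.min(axis=0)   # length, width, height
    return np.array([
        extent[0],                 # body length (m)
        extent[1] * extent[2],     # cross-section proxy (m^2)
        extent.prod(),             # bounding-box volume proxy (m^3)
        cloud[:, 2].mean(),        # mean height above the stretcher (m)
    ])

rng = np.random.default_rng(0)
clouds = [rng.uniform([0, 0, 0], [1.5 + 0.3 * i, 0.5, 0.25], (2000, 3))
          for i in range(8)]
X = np.array([features(c) for c in clouds])
weights_kg = np.array([55, 62, 70, 76, 84, 90, 97, 105])

# Stand-in for the trained network: linear least squares with a bias term.
Xb = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(Xb, weights_kg, rcond=None)
print(Xb @ coef)   # fitted weights close to the targets
```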
The experiments section evaluates different configurations of the previously extracted feature vector. Additionally, the approach presented here is compared to state-of-the-art methods: the patient's own assessment, the physician's guess, and an anthropometric estimation. Apart from the patient's own estimation, Libra3D outperforms all state-of-the-art estimation methods: 95 percent of all patients are estimated with a relative error of less than 10 percent with respect to the ground truth body weight. The measurement takes only a minimal amount of time, and the approach can easily be integrated into the treatment of stroke patients without hindering the physicians.
Furthermore, the experiments section demonstrates two additional applications: the extracted features can also be used to estimate the body weight of people standing or even walking in front of a 3D camera. It is also possible to determine or classify the BMI of a subject on a stretcher. A potential application of this approach is the reduction of the radiation dose for patients exposed to X-rays during a CT examination.
During the time of this thesis, several data sets were recorded. These data sets contain the ground truth body weight as well as the data from the sensors. They are available for collaboration in the field of body weight estimation for medical applications.
Almost every week, the news is filled with broadcasts about earthquakes, hurricanes, tsunamis, or forest fires. While such news is hard to watch, it is even harder for rescue troops to enter such areas. They need particular skills to get a quick overview of the devastated area and to find victims. Time is ticking, since the chance of survival shrinks the longer it takes until help arrives. To coordinate the teams efficiently, all information needs to be collected at the command center. To this end, teams search the destroyed houses and hollow spaces for victims. Doing so, they can never be sure that the building will not fully collapse while they are inside. Here, rescue robots are welcome helpers, as they are replaceable and make the work safer. Unfortunately, rescue robots are not yet usable off the shelf.
There is no doubt that such a robot has to fulfil essential requirements to successfully accomplish a rescue mission. Apart from the mechanical requirements, it has to be able to build a 3D map of the environment. This is essential to navigate through rough terrain and to fulfil manipulation tasks (e.g. opening doors). To build a map and gather environmental information, robots are equipped with multiple sensors. Since laser scanners produce precise measurements and support a wide scanning range, they are the visual sensors most commonly utilized for mapping.
Unfortunately, they produce erroneous measurements when scanning transparent objects (e.g. glass, transparent plastic) or specular reflective objects (e.g. mirrors, shiny metal). Such objects can be anywhere, and manipulating the scene beforehand to prevent their influence is impossible. Using additional sensors also bears risks.
The problem is that these objects are only occasionally visible, depending on the incident angle of the laser beam, the surface, and the type of object. For transparent objects, measurements might result from the object surface or from objects behind it. For specular reflective objects, measurements might result from the object surface or from a mirrored object. These mirrored objects appear behind the surface, which is wrong. To obtain a precise map, the surfaces need to be recognised and mapped reliably; otherwise, the robot navigates into them and crashes. Furthermore, points behind the surface should be identified and treated according to the object type: points behind a transparent surface should remain, as they represent real objects, whereas points behind a specular reflective surface should be erased. To do so, the object type needs to be classified. Unfortunately, none of the current approaches is capable of fulfilling these requirements.
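The treatment rule just described (keep points behind a transparent surface, erase points behind a specular reflective one) can be sketched for a single detected planar patch. A minimal Python illustration with invented plane parameters, not the thesis' ROS implementation:

```python
import numpy as np

def filter_scan(points, plane_point, normal, surface_type):
    """Keep or erase measurements behind a detected surface patch.

    `normal` points from the surface toward the sensor. Points with a
    negative signed distance lie behind the surface: real objects if the
    surface is transparent, mirrored artifacts if it is reflective.
    """
    signed_dist = (points - plane_point) @ normal
    behind = signed_dist < 0.0
    if surface_type == "reflective":
        return points[~behind]       # erase mirrored artifacts
    return points                     # transparent: keep everything

# Glass/mirror pane at x = 2 m, sensor at the origin looking along +x.
scan = np.array([[1.0, 0.2, 0.0],     # object in front of the pane
                 [2.0, 0.0, 0.0],     # the pane surface itself
                 [3.5, -0.1, 0.0]])   # measurement behind the pane
pane, n = np.array([2.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])

print(len(filter_scan(scan, pane, n, "transparent")))   # 3: all kept
print(len(filter_scan(scan, pane, n, "reflective")))    # 2: artifact erased
```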
Therefore, this thesis addresses the problem of detecting transparent and specular reflective objects and identifying their influences. To give the reader a starting point, the first chapters describe: the theoretical background concerning the propagation of light; sensor systems applied for range measurements; the mapping approaches used in this work; and the state of the art concerning the detection and identification of transparent and specular reflective objects. Afterwards, the Reflection-Identification-Approach, which is the core of this thesis, is presented. It comprises a 2D and a 3D implementation to detect and classify such objects, both available as ROS nodes. In the next chapter, various experiments demonstrate the applicability and reliability of these nodes. They prove that transparent and specular reflective objects can be detected and classified. In 2D, a Pre- and Post-Filter module is required for this; in 3D, classification is possible with the Pre-Filter alone, owing to the higher number of measurements. An example shows that an updatable mapping module allows the robot navigation to rely on refined maps; otherwise, two individual maps are built, which require a fusion afterwards. Finally, the last chapter summarizes the results and proposes suggestions for future work.
This work is concerned with the numerical approximation of solutions to models that are used to describe atmospheric or oceanographic flows. In particular, this work concentrates on the approximation of the Shallow Water equations with bottom topography and the compressible Euler equations with a gravitational potential. Numerous methods have been developed to approximate solutions of these models. Of specific interest here are the approximations of near equilibrium solutions and, in the case of the Euler equations, the low Mach number flow regime. It is inherent in most of the numerical methods that the quality of the approximation increases with the number of degrees of freedom that are used. Therefore, these schemes are often run in parallel on big computers to achieve the best possible approximation. However, even on those big machines, the desired accuracy cannot be achieved with the given maximal number of degrees of freedom that these machines allow. The main focus of this work therefore lies in the development of numerical schemes that give a better resolution of the resulting dynamics on the same number of degrees of freedom, compared to classical schemes.
This work is the result of a cooperation of Prof. Klingenberg of the Institute of Mathematics in Würzburg and Prof. Röpke of the Astrophysical Institute in Würzburg. The aim of this collaboration is the development of methods to compute stellar atmospheres. Two main challenges are tackled in this work. The first is the accurate treatment of source terms in the numerical scheme. This leads to the so-called well-balanced schemes, which allow for an accurate approximation of near equilibrium dynamics. The second challenge is the approximation of flows in the low Mach number regime. It is known that the compressible Euler equations tend towards the incompressible Euler equations as the Mach number tends to zero. Classical schemes often show excessive diffusion in that flow regime. The scheme developed here falls into the category of asymptotic preserving schemes, i.e. the numerical scheme reflects the behavior of the continuous equations. Moreover, it is shown that the diffusion of the numerical scheme is independent of the Mach number.
In chapter 3, an HLL-type approximate Riemann solver is adapted for simulations of the Shallow Water equations with bottom topography to develop a well-balanced scheme. In the literature, most schemes only tackle the equilibria in which the fluid is at rest, the so-called lake-at-rest solutions. Here, a scheme is developed that accurately captures all the equilibria of the Shallow Water equations. Moreover, in contrast to other works, a second-order extension is proposed that does not rely on an iterative scheme inside the reconstruction procedure, leading to a more efficient scheme.
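For the simpler lake-at-rest case mentioned above, the well-balanced idea can be sketched with a standard hydrostatic reconstruction around a first-order Rusanov flux (in the spirit of Audusse et al., not the HLL-type scheme of chapter 3): the discrete equilibrium h + b = const, u = 0 is then preserved to rounding error.

```python
import numpy as np

g = 9.81

def physical_flux(h, hu):
    u = hu / max(h, 1e-12)
    return np.array([hu, hu * u + 0.5 * g * h**2])

def rusanov(hL, huL, hR, huR):
    uL, uR = huL / max(hL, 1e-12), huR / max(hR, 1e-12)
    a = max(abs(uL) + np.sqrt(g * hL), abs(uR) + np.sqrt(g * hR))
    FL, FR = physical_flux(hL, huL), physical_flux(hR, huR)
    return 0.5 * (FL + FR) - 0.5 * a * np.array([hR - hL, huR - huL])

def step(h, hu, b, dx, dt):
    """One forward-Euler step with hydrostatic reconstruction (outflow BCs)."""
    he, hue, be = [np.pad(q, 1, mode="edge") for q in (h, hu, b)]
    N = len(h)
    dh = np.zeros(N)
    dhu = np.zeros(N)
    for i in range(N + 1):                       # interfaces incl. ghosts
        bstar = max(be[i], be[i + 1])
        hL = max(0.0, he[i] + be[i] - bstar)
        hR = max(0.0, he[i + 1] + be[i + 1] - bstar)
        uL = hue[i] / max(he[i], 1e-12)
        uR = hue[i + 1] / max(he[i + 1], 1e-12)
        F = rusanov(hL, hL * uL, hR, hR * uR)
        # pressure corrections restore the flux/source-term balance
        if i >= 1:
            dh[i - 1] -= F[0]
            dhu[i - 1] -= F[1] + 0.5 * g * (he[i]**2 - hL**2)
        if i <= N - 1:
            dh[i] += F[0]
            dhu[i] += F[1] + 0.5 * g * (he[i + 1]**2 - hR**2)
    return h + dt / dx * dh, hu + dt / dx * dhu

# Lake at rest over a smooth bump: h + b = 1, u = 0.
x = np.linspace(0, 1, 50)
b = 0.4 * np.exp(-100 * (x - 0.5)**2)
h, hu = 1.0 - b, np.zeros_like(x)
for _ in range(20):
    h, hu = step(h, hu, b, dx=x[1] - x[0], dt=1e-3)
print(np.abs(h + b - 1.0).max(), np.abs(hu).max())   # both at rounding level
```

Capturing the moving equilibria treated in the thesis requires the more elaborate reconstruction developed there; this sketch only demonstrates the balancing mechanism for the resting case.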
In chapter 4, a Suliciu relaxation scheme is adapted for the resolution of hydrostatic equilibria of the Euler equations with a gravitational potential. The hydrostatic relations are underdetermined, and therefore the solutions to these equations are not unique. Nevertheless, the scheme is shown to be well-balanced for a wide class of hydrostatic equilibria. For specific classes, quadrature rules are computed to ensure the exact well-balanced property. Moreover, the scheme is shown to be robust, i.e. it preserves the positivity of mass and energy, and stable with respect to the entropy. Numerical results are presented in order to investigate the impact of the different quadrature rules on the well-balanced property.
In chapter 5, a Suliciu relaxation scheme is adapted for the simulation of low Mach number flows. The scheme is shown to be asymptotic preserving and not to suffer from excessive diffusion in the low Mach number regime. Moreover, it is shown to be robust under certain parameter combinations and to be stable according to a Chapman-Enskog analysis.
Numerical results are presented in order to show the advantages of the new approach.
In chapter 6, the schemes developed in chapters 4 and 5 are combined in order to investigate the performance of the numerical scheme in the low Mach number regime in a gravitationally stratified atmosphere. The scheme is shown to be well-balanced, robust, and stable with respect to a Chapman-Enskog analysis. Numerical tests are presented to show the advantage of the newly proposed method over the classical scheme.
In chapter 7, some remarks on an alternative way to tackle multidimensional simulations are presented. However, no numerical simulations are performed, and it is shown why further research on the suggested approach is necessary.
In order to shrink the size of semiconductor devices and improve their efficiency at the same time, silicon-based semiconductor devices have been engineered until the material almost reaches its performance limits. As the candidate to be used next in semiconductor devices, single-wall carbon nanotubes (SWNTs) show great potential due to their promise of increased device efficiency and their high charge carrier mobilities in nanometer-sized active areas. However, there are material-based problems to overcome before SWNTs can be employed in semiconductor devices: SWNTs tend to aggregate in bundles, obtaining an electronically or chirally homogeneous SWNT dispersion is not trivial, and once that is achieved, a homogeneous thin film needs to be produced with a technique that is practical, easy, and scalable. This work aimed to solve both of these problems.
In the first part of this study, six different polymers, containing fluorene or carbazole as the rigid part and bipyridine, bithiophene, or biphenyl as the accompanying copolymer unit, were used to selectively disperse semiconducting SWNTs. From absorption and photoluminescence spectroscopy of the corresponding dispersions, it was found that the rigid part of the copolymer plays the primary role in determining its dispersion efficiency and electronic sorting ability. Of the two tested units, carbazole has the higher π electron density; due to the increased π−π interactions, carbazole-containing copolymers have a higher dispersion efficiency. The electronic sorting ability of fluorene-containing polymers, however, is superior. The chiral selectivity of the polymers in the dispersion is not directly foreseeable from the choice of backbone units. In the end, obtaining a monochiral dispersion was found to depend strongly on the raw material used in combination with the chosen polymer.
Next, one of the best-performing polymers in terms of chirality enrichment and electronic sorting ability was chosen to disperse SWNTs. Thin films with thicknesses varying between 18 ± 5 and 755 ± 5 nm were prepared using the vacuum filtration wet transfer method in order to analyze them optically and electronically.
The scalability and efficiency of the integrated thin-film production method were shown using optical, topographical, and electronic measurements. The relative photoluminescence quantum yield of the radiative decay from the SWNT thin films was found to be constant across the thickness range. Constant surface roughness and a linearly increasing SWNT concentration also supported the scalability of this thin-film production method. Electronic measurements on bottom-gate top-contact transistors showed an apparently increasing charge carrier mobility in the linear and saturation regimes. This was caused by the missing normalization of the mobility to the thickness of the active layer, which emphasizes the importance of considering this dimension when comparing mobilities of different field-effect transistors.
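The normalization point can be made concrete with the standard linear-regime field-effect mobility extraction, mu_lin = (L / (W * Ci * V_DS)) * dI_D/dV_G, which contains no film-thickness term; a hedged numeric sketch with invented device values:

```python
# Linear-regime field-effect mobility from the transfer characteristic.
# Illustrative numbers only; Ci is the gate capacitance per unit area.
L = 20e-6       # channel length (m)
W = 1e-3        # channel width (m)
Ci = 1.2e-4     # gate capacitance per area (F/m^2)
V_DS = -1.0     # drain-source voltage (V)
gm = 2.4e-6     # transconductance dI_D/dV_G (S), from a linear fit

mu_lin = (L / (W * Ci * abs(V_DS))) * abs(gm)    # m^2/(V s)
print(mu_lin * 1e4, "cm^2/(V s)")
# Comparing devices of different film thickness by this number alone is
# misleading: a thicker film carries more current without the formula
# accounting for it, which mimics a higher "mobility".
```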
This thesis outlines studies performed on the fluorescence dynamics of phenyl-benzo[c]-tetrazolo-cinnolium chloride (PTC) in alcoholic solutions of varying viscosity using time-resolved fluorescence spectroscopy. Furthermore, the properties of femtosecond Laguerre-Gaussian (LG) laser pulses are investigated with respect to their temporal and spatial features, and an approach is developed to measure and control the spatial intensity distribution on the time scale of the pulse.
Tetrazolium salts are widely used in biological assays for their low oxidation and reduction thresholds and their spectroscopic properties. However, a neglected feature in these applications is the advantage that the detection of emitted light has over the determination of absorbance. To corroborate this, PTC, as one of the few known fluorescent tetrazolium salts, was investigated with regard to its luminescent features. Steady-state spectroscopy revealed how PTC can be formed by a photoreaction from 2,3,5-triphenyl-tetrazolium chloride (TTC) and how the fluorescence quantum yield behaves in alcoholic solvents of different viscosity. In the same array of solvents, time-correlated single photon counting (TCSPC) measurements were performed and the fluorescence decay was investigated. Global analysis of the results revealed different dynamics in the different solvents; although the main emission constant changed with the solvent, taking the fluorescence quantum yield into consideration showed that the radiative rate is independent of the solvent. The non-radiative rate, however, was highly solvent dependent and responsible for the observed solvent-related changes in the fluorescence dynamics. Further studies with the increased time resolution of femtosecond fluorescence upconversion revealed that the main emission constant is independent of the excitation energy, although the cooling processes prior to emission were prolonged for higher excitation energies. This led to a conceivable photoreaction scheme with one emissive state and a competing non-radiative relaxation channel that may involve an intermediate state.
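The solvent-independence argument uses the standard partition of the measured decay: with quantum yield Phi and fluorescence lifetime tau, the radiative and non-radiative rates are k_r = Phi/tau and k_nr = (1 - Phi)/tau. A small worked example with invented values that reproduces the qualitative picture (constant k_r, solvent-dependent k_nr):

```python
def rates(phi, tau_s):
    """Radiative and non-radiative decay rates from quantum yield and lifetime."""
    k_r = phi / tau_s
    k_nr = (1.0 - phi) / tau_s
    return k_r, k_nr

# Invented data for two solvents: similar k_r, strongly differing k_nr.
for solvent, phi, tau_ns in [("methanol", 0.02, 0.20), ("glycerol", 0.10, 1.00)]:
    k_r, k_nr = rates(phi, tau_ns * 1e-9)
    print(solvent, f"k_r = {k_r:.2e} 1/s", f"k_nr = {k_nr:.2e} 1/s")
```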
LG laser beams and their properties have seen a lot of scientific attention over the past two decades. Also in the context of new techniques pushing the limits of technology to explore new phenomena, it is essential to understand the features of this beam class and to check the consistency of the findings with theory. The mode conversion of a Hermite-Gaussian (HG) mode into an LG mode with the help of a spiral phase plate (SPP) was investigated with respect to its space-time characteristics. It was found that femtosecond LG and HG pulses of a given temporal duration share the same spectrum and can be characterized using the same well-established methods. The mode conversion proved to produce only the desired LG mode with its characteristic orbital angular momentum (OAM), which is conserved after frequency doubling the pulse. Furthermore, it was demonstrated that temporal shaping of the HG pulse does not alter the result of its mode conversion, as three completely different temporal pulse shapes produced the same LG mode. Further attention was given to the sum frequency generation of fs LG beams and to the dynamics of the interference of an HG and an LG pulse. It was found that, if both are chirped with opposite signs, the spatial intensity distribution rotates around the beam axis on the time scale of the pulse. A strategy was found that would enable a measurement of these dynamics by upconversion of the interference with a third gate pulse. The results are discussed theoretically, and an experimental realization was attempted. The simulated findings could only be reproduced to a limited extent due to experimental limitations, especially the interferometric stability of the setup.
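For reference, the characteristic donut intensity of the lowest-order LG mode with topological charge l (radial index p = 0) is I(r) proportional to (2r^2/w^2)^|l| * exp(-2r^2/w^2), with a dark center carrying the exp(i*l*phi) OAM phase and a bright ring at r = w*sqrt(|l|/2). A small sketch evaluating this on a grid (beam waist and charge are arbitrary example values):

```python
import numpy as np

def lg_intensity(x, y, w=1.0, l=1):
    """Intensity of an LG_{p=0,l} mode (up to normalization)."""
    r2 = x**2 + y**2
    return (2 * r2 / w**2) ** abs(l) * np.exp(-2 * r2 / w**2)

x = np.linspace(-2, 2, 401)
X, Y = np.meshgrid(x, x)
I = lg_intensity(X, Y, w=1.0, l=1)

# Dark center and a bright ring at r = w * sqrt(|l| / 2).
r_peak = np.hypot(X, Y).ravel()[I.argmax()]
print(I[200, 200], r_peak)   # zero on axis, ring near 0.707 w
```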
A complete simulation system is proposed that can be used as an educational tool by physicians to train basic skills of minimally invasive vascular interventions. In the first part, a surface model is developed to assemble arteries from a planar segmentation. It is based on Sweep Surfaces and can be extended to T- and Y-like bifurcations. A continuous force vector field is described, representing the interaction between the catheter and the surface. The computation time of the force field is almost unaffected when the resolution of the artery is increased.
The mechanical properties of arteries play an essential role in the study of circulatory system dynamics, which has become increasingly important in the treatment of cardiovascular diseases. In virtual reality simulators, it is crucial to have a tissue model that responds in real time. In this work, the arteries are discretized by a two-dimensional mesh and the nodes are connected by three kinds of linear springs. Three tissue layers (intima, media, adventitia) are considered and, starting from the stretch-energy density, some of the elasticity tensor components are calculated. The physical model linearizes and homogenizes the material response but still accounts for the geometric nonlinearity. In general, if the arterial stretch varies by 1% or less, the agreement between the linear and nonlinear models is trustworthy.
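The 1% bound can be illustrated numerically by comparing a linearized stress response sigma = E*(lambda - 1) with a generic exponential (Fung-type) soft-tissue law, used here purely as a hypothetical stand-in for the nonlinear material, with invented parameters:

```python
import math

E, a = 1.0e5, 8.0   # illustrative stiffness (Pa) and nonlinearity parameter

def sigma_linear(lam):
    return E * (lam - 1.0)

def sigma_fung(lam):
    # Exponential stress-stretch law, a common soft-tissue stand-in.
    return (E / a) * (math.exp(a * (lam - 1.0)) - 1.0)

for lam in (1.01, 1.10):
    rel_err = abs(sigma_linear(lam) - sigma_fung(lam)) / sigma_fung(lam)
    print(f"stretch {lam}: relative deviation {rel_err:.1%}")
```

With these example parameters, the deviation stays in the low percent range at 1% stretch but grows rapidly for larger stretches, consistent with the stated range of validity of the linearized model.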
In the last part, the physical model of the wire proposed by Konings is improved. As a result, a simpler and more stable method is obtained to calculate the equilibrium configuration of the wire. In addition, a geometrical method is developed to perform relaxations. It is particularly useful when the wire is hindered in the physical method because of the boundary conditions. The physical and the geometrical methods are merged, resulting in efficient relaxations. Tests show that the shape of the virtual wire agrees with experiments. The proposed algorithm allows real-time execution, and the hardware needed to assemble the simulator has a low cost.
Magnetic Particle Imaging (MPI) is a novel tomographic imaging modality capable of detecting the three-dimensional distribution of superparamagnetic nanoparticles. Owing to the direct detection of the tracer, MPI is a very fast and sensitive method [12], but it requires a second imaging modality such as magnetic resonance imaging (MRI) or computed tomography to localize the tracer anatomically (e.g., within tissue). This structural registration is frequently performed with fusion imaging, in which the samples are measured separately in the two devices and the data sets are correlated retrospectively [75][76]. In a first experiment, a traveling-wave MPI scanner (TWMPI) [17] had already been combined with a low-field MRI scanner and the first hybrid measurements performed [15]. The technical effort of building two separate devices, together with the fact that an MRI system at 30 mT requires very long measurement times, motivated an integrated TWMPI-MRI hybrid system in which the dynamic linear gradient array (dLGA) of a TWMPI scanner intrinsically generates the B0 field for the MRI unit.
The goal of this work was to lay the foundations for an integrated TWMPI-MRI hybrid scanner. The geometry of the dLGA was to remain unchanged so that TWMPI measurements would still be possible without restrictions. The most important steps and results of this work are summarized below.
At the beginning of this work, magnetic field simulations were used to search for a suitable current distribution that would generate a sufficiently homogeneous magnetic field with the dLGA alone. The simulations showed that as few as two different currents in 14 of the 20 individual coils of the dLGA sufficed to achieve a field of view (FOV) of 36 mm x 12 mm with adequate homogeneity. The homogeneity within this FOV was 3000 ppm. For the target field strength of 235 mT, currents of 129 A and 124 A were required.
The high currents of the dLGA required the development of a dedicated amplifier. The original concept, based on a linearly driven power transistor, was improved in numerous steps until the required currents could be switched on and off stably. Using a whole-body MRI system, the B0 field of the dLGA, generated by the custom-built amplifier, could be measured for the first time and compared with the simulation. The two profiles showed good qualitative agreement.
Finding the NMR signal was challenging because of the custom-built amplifier: at that time it did not yet reach the necessary precision, and the most important parameter, the magnetic field strength inside the dLGA, could not be measured. In contrast, the pulse lengths for the spin-echo sequence could be measured very well, although the optimal value was not yet known. The correct settings were found by iterative measurements and were re-adjusted after each hardware change.
The performance of the amplifier could be examined more closely by repeated measurements of the NMR signal. It became apparent that the precision had to be improved further in order to obtain reproducible results. The NMR signal also allowed the B0 field to be mapped; it showed good agreement with the simulation. Using four segment coils of the dLGA, it was possible to generate a linear gradient along the z-axis. This gradient was switched in addition to the B0 field and likewise mapped; its profile also agreed well with the simulation. With the gradient, frequency encoding and phase encoding were successfully implemented, and in both measurements two samples could be distinguished by their position. This completed the development of the MRI scanner.
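The principle behind the frequency encoding used here is that under a linear gradient G, spins at different positions precess at different angular frequencies, so position is encoded in the spectrum of the received signal. A minimal sketch of the forward signal model (the gradient strength, sample positions, and sampling times are illustrative values, not parameters of this scanner):

```python
import cmath
import math

# Proton gyromagnetic ratio [rad/(s*T)] -- physical constant
GAMMA = 2 * math.pi * 42.58e6

def encode_signal(positions, densities, gradient, times):
    """1-D frequency encoding: a spin at position x precesses at angular
    frequency GAMMA * gradient * x, so the received signal is a sum of
    complex exponentials whose frequencies encode position."""
    return [sum(rho * cmath.exp(1j * GAMMA * gradient * x * t)
                for x, rho in zip(positions, densities))
            for t in times]
```

A Fourier transform of the sampled signal then separates the contributions of the two samples, which is how they are distinguished by position.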
Besides the construction of the dLGA, building the TWMPI scanner required the fabrication of saddle coils. For the MPI measurements, the missing part of the transmit chain and the entire receive chain could be reused from an earlier version. The functionality of the MPI part was likewise verified with a point sample and a phantom, here in two dimensions.
Extending the setup to a hybrid scanner required further modifications compared with a pure TWMPI or MRI scanner. A way had to be found to quickly adapt the wiring of the dLGA to the respective modality; for this purpose, a patch board was built that allows the dLGA cabling to be changed in a short time. In addition, the saddle coils and the TWMPI receive coil as well as the MRI receive coil had to be accommodated inside the dLGA. A modular system allowed all components to be arranged inside the dLGA simultaneously. The measurable FOV of the MRI part is matched to the homogeneity of the B0 field; the FOV of the TWMPI part is more extended.
At the end of this work, a hybrid measurement was carried out successfully. The phantom consisted of two spheres filled with oil and two filled with an MPI tracer (Resovist). TWMPI allowed the spatial imaging of the Resovist spheres, while MRI imaged the oil spheres. This in situ measurement demonstrated the successful implementation of the TWMPI-MRI hybrid scanner concept.
In summary, this work laid the foundations for a TWMPI-MRI hybrid scanner. The greatest difficulty was generating a B0 field sufficiently homogeneous for MRI to record a good NMR signal. A simple current distribution consisting of two different currents produced a sufficiently homogeneous B0 field; more complex current distributions can further improve the homogeneity and thus enlarge the FOV.
In this work, MRI imaging was implemented for one dimension and is to be extended to 2D and 3D in future work. Ultimately, the particle distribution of the MPI tracer in living subjects is to be assigned to their anatomy by means of an MRI image. The first preclinical applications with the TWMPI scanner have been carried out in [76][77][78]; such applications gain significance through the additional information provided by a TWMPI-MRI hybrid scanner.
Future work should also enlarge the FOV for MRI, and an electronic switch for toggling the dLGA between MRI and MPI would be worthwhile. The next version of the hybrid scanner could, for example, contain a completely redesigned dLGA in which each segment coil is split once in the radial direction into an inner and an outer coil. For MRI, the two coil parts would be driven in opposition to obtain a field that is homogeneous in the radial direction; for TWMPI, they would be driven in parallel to achieve the strongest possible field gradient.
This work generated a great deal of knowledge for the next version of a TWMPI-MRI hybrid scanner that will be extremely helpful for the new design. The mapping of the B0 field showed that the simulated magnetic fields agree well with the measured ones, and much was learned about combining TWMPI with MRI.
Coherent Multidimensional Spectroscopy in Molecular Beams and Liquids Using Incoherent Observables
(2018)
The aim of the present work was to implement an experimental approach that enables coherent two-dimensional (2D) electronic spectroscopy of samples in various states of matter. For samples in the liquid phase, a setup was realized that utilizes the sample fluorescence for the acquisition of 2D spectra. Whereas the liquid-phase approach has been established before, coherent 2D spectroscopy on gaseous samples in a molecular beam as developed in this work is in fact a new method. It employs for the first time cations in a time-of-flight mass spectrometer for signal detection and was used to obtain the first ion-selective 2D spectra of a molecular-beam sample. Additionally, a new acquisition concept was developed in this thesis that significantly decreases measurement times in 2D spectroscopy using optimized sparse sampling and a compressed-sensing reconstruction algorithm.
Characteristic of the variant of 2D spectroscopy presented in this work is the use of a phase-coherent sequence of four laser pulses in a fully collinear geometry for sample excitation. The pulse sequence was generated by a custom-designed pulse shaper that is capable of rapid scanning by changing pulse parameters such as time delays and phases at the repetition rate of the laser. The sample's response was detected by monitoring incoherent observables that arise from the final-state population, for instance fluorescence or cations. Phase cycling, i.e., signal acquisition with different combinations of the relative phases of the excitation pulses, was applied to extract nonlinear signal contributions from the full signal during data analysis.
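Phase cycling isolates a nonlinear contribution by weighting the signals recorded at different relative phases with complex exponentials, in the manner of a discrete Fourier transform over the phase variable. A minimal sketch with a single phase variable (the function name and step count are illustrative; the actual experiment cycles the phases of several pulses):

```python
import cmath

def phase_cycle_extract(signal_fn, order, steps):
    """Isolate the signal component that varies as exp(1j * order * phi)
    by an N-step phase cycle: a discrete Fourier sum over the relative
    phase phi of the excitation pulses."""
    acc = 0j
    for n in range(steps):
        phi = 2 * cmath.pi * n / steps
        acc += signal_fn(phi) * cmath.exp(-1j * order * phi)
    return acc / steps
```

Components with different phase orders (up to aliasing at the number of steps) average to zero, so only the targeted nonlinear contribution survives the sum.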
Liquid-phase 2D fluorescence spectroscopy was established with the laser dye cresyl violet as a sample molecule, confirming coherent oscillations previously observed in literature that are originating from vibronic coherences in specific regions of the 2D spectrum.
The data set of this experiment was used subsequently to introduce optimized sparse sampling in 2D spectroscopy. An optimization algorithm was implemented to find the best sampling pattern while taking only one quarter of the regular time-domain sampling points, thereby reducing the acquisition time by a factor of four. Signal recovery was based on a new and compact representation of 2D spectra using the von Neumann basis, which required about six times fewer coefficients than the Fourier basis to retain the relevant information. Successful reconstruction was shown by recovering the coherent oscillations in cresyl violet from a reduced data set.
Finally, molecular-beam coherent 2D spectroscopy was introduced with an investigation of ionization pathways in highly excited nitrogen dioxide, revealing transitions to discrete auto-ionizing states as the dominant contribution to the ion signal. Furthermore, the ability of the time-of-flight approach to obtain reactant and product 2D spectra simultaneously enabled the observation of distinct differences in the multiphoton-ionization response functions of the nitrogen dioxide cation and the nitrogen oxide ionic fragment.
The developed experimental techniques of this work will facilitate fast acquisition of 2D spectra for samples in various states of matter and permit reliable direct comparison of results. Therefore, they pave the way to study the properties of quantum coherences during photophysical processes or photochemical reactions in different environments.
It was the scope of this work to gain a deeper understanding of the correlation between the interface energetics of molecular semiconductors in planar organic solar cells and the corresponding optoelectronic characteristics. To this end, different approaches were followed. First, a direct variation of donor/acceptor (D/A) interface energetics of bilayer cells was achieved by utilizing systematically modified donor compounds; this change could be correlated with the macroscopic device performance. Second, the impact of interface energetics was illustrated employing a more extended device architecture: by introducing a thin interlayer between a planar D/A heterojunction, an energetic staircase was established, and exciton dissociation in such devices could be linked to the cascade energy level alignment of the photo-active materials. Finally, the two fullerene molecules C60 and C70 were employed in co-evaporated acceptor phases. The expected discrepancy in their electronic structure was related to the transport properties of the corresponding organic photovoltaic cells (OPVCs). The fullerenes are created simultaneously in common synthesis procedures; besides its photo-physical relevance, the study was carried out to judge the necessity of separating the components by purification, which constitutes the cost-determining step of the total production costs.
Dementia is a complex neurodegenerative syndrome that by 2050 could affect about 135 million people worldwide. People with dementia experience a progressive decline in their cognitive abilities and have serious problems coping with activities of daily living, including orientation and wayfinding tasks. They even experience difficulties finding their way in a familiar environment. Being lost, or fear of getting lost, may consequently develop into other psychological deficits such as anxiety, suspicions, illusions, and aggression. Frequent results are social isolation and a reduced quality of life. Moreover, the lives of relatives and caregivers of people with dementia are also negatively affected.
Regarding navigation and orientation, most existing approaches focus on outdoor environments and on people with mild dementia, who are capable of using mobile devices. However, Rasquin (2007) observes that even a device with three buttons may be too complicated for people with moderate to severe dementia. In addition, people living in care homes mainly perform indoor activities. Given this background, we decided to focus on designing a system for indoor environments for people with moderate to severe dementia who are unable or reluctant to use smartphone technology.
Adopting a user-centered design approach, the context and requirements of people with dementia were gathered as a first step to understand the needs and difficulties (especially spatial disorientation and wayfinding problems) experienced in dementia care facilities. Then, an "Implicit Interactive Intelligent (III) Environment" for people with dementia was proposed, emphasizing implicit interaction and natural interfaces. The backbone of this III Environment supports orientation and navigation tasks with three systems: a monitoring system, an intelligent system, and a guiding system. The monitoring system and the intelligent system automatically detect and interpret the locations and activities of the users, i.e., people with dementia. This implicit input reduces both the cognitive and the physical workload on the user to provide input. The intelligent system is also context-aware, predicts upcoming situations (location, activity), and decides when to provide an appropriate service to the users. The guiding system, with intuitive and dynamic environmental cues (colored lighting), is responsible for guiding the users to the places they need to be.
Overall, three types of monitoring system using Ultra-Wideband and iBeacon technologies, with different techniques and algorithms, were implemented for different contexts of use. They showed high user acceptance at a reasonable price as well as decent accuracy and precision. In the intelligent system, models were built to recognize the user's current activity, detect erroneous activities, predict the next location and activity, analyze the history data, detect issues, and notify caregivers and suggest solutions via visualized web interfaces. Regarding the guiding system, five studies were conducted to test and evaluate the effect of colored lighting on people with dementia; the results were promising. Although several components of the III Environment in general, and the three systems in particular, are in place (implemented and tested separately), integrating them all and deploying the result in the dementia context for a full, proper evaluation with formal stakeholders (people with dementia and caregivers) remains a future step.
Mini Unmanned Aerial Vehicles (MUAVs) are becoming a popular research platform and drawing considerable attention, particularly during the last decade, due to their affordability and multi-dimensional applications in almost every walk of life. MUAVs have obvious advantages over manned platforms, including much lower manufacturing and operational costs, risk avoidance for human pilots, the ability to fly safely low and slow, and the realization of operations that are beyond inherent human limitations. Advances in Micro-Electro-Mechanical Systems (MEMS) technology, avionics, and sensor miniaturization have also played a significant role in the evolution of MUAVs. These vehicles range from simple toys found in electronics supermarkets for entertainment to highly sophisticated commercial platforms performing novel assignments such as offshore wind power station inspection and 3D modeling of buildings. MUAVs are also more environmentally friendly, as they cause less air pollution and noise. Unmanned is therefore unmatched. Recent research focuses on the use of multiple inexpensive vehicles flying together, while maintaining the required relative separations, to carry out tasks more efficiently than a single exorbitant vehicle. Redundancy also does away with the risk of losing a single vehicle on which the whole mission depends. Valuable applications in the domain of cooperative control include joint load transportation, search and rescue, mobile communication relays, pesticide spraying, and weather monitoring. Though the realization of multi-UAV coupled flight is complex, the obvious advantages justify the laborious work involved...
This work focused on the development of a series of MRI techniques specifically designed and optimized to obtain quantitative, spatially resolved information about characteristic parameters of the lung. Three image acquisition techniques were developed; each allows quantifying a different parameter of relevant diagnostic interest for the lung, as described below:
1) The blood volume fraction, which represents the amount of lung water in the intravascular compartment expressed as a fraction of the total lung water. This parameter is related to lung perfusion.
2) The magnetization relaxation times T\(_2\) and T*\(_2\), which include the component of T\(_2\) associated with the diffusion of water molecules through the internal magnetic field gradients of the lung. Because the amplitude of these internal gradients is related to the alveolar size, T\(_2\) and T*\(_2\) can be used to obtain information about the microstructure of the lung.
3) The broadening of the NMR spectral line of the lung. This parameter depends on lung inflation and on the concentration of oxygen in the alveoli. For this reason, the spectral line broadening can be regarded as a fingerprint for lung inflation; furthermore, in combination with oxygen enhancement, it provides a measure for lung ventilation.
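Relaxation times such as those in point 2 are typically obtained by fitting a mono-exponential decay S(TE) = S0 · exp(−TE/T2*) to signal magnitudes acquired at several echo times. A minimal log-linear fitting sketch (illustrative only; the mapping techniques developed in this work are more involved):

```python
import math

def fit_t2star(echo_times, magnitudes):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2s).
    Taking the log turns the decay into a straight line whose slope
    is -1/T2s. Returns (S0, T2s) in the units of the inputs."""
    ys = [math.log(m) for m in magnitudes]
    n = len(echo_times)
    mx = sum(echo_times) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in echo_times)
    sxy = sum((x - mx) * (y - my) for x, y in zip(echo_times, ys))
    slope = sxy / sxx
    s0 = math.exp(my - slope * mx)
    return s0, -1.0 / slope
```

Applied voxel by voxel to a multi-echo acquisition, such a fit yields a parameter map of the relaxation time.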
This thesis investigates the correlation of structural, electronic, and magnetic properties on metallic surfaces with scanning tunneling microscopy (STM) and spectroscopy (STS). First, the spin-split surface state of Ni(111) is analyzed; the focus then shifts to thin iron films grown on Rh(001); finally, the CePt$_5$/Pt(111) surface alloy is examined. Nickel is a well-known ferromagnet, and its (111) surface has repeatedly been the object of theoretical and experimental studies. Despite intensive efforts, inconsistent results have been published and a clear, consistent picture is still lacking. For this reason, the Ni(111) surface is explored with STM and STS, which provide access to both occupied and unoccupied states. Using quasiparticle interference, a detailed description of the band dispersion is obtained. The exchange splitting between the minority and majority surface states is determined to be ∆E$_{ex}$ = (100 ± 8) meV. The onset of the majority band lies at E − E$_F$ = −(160 ± 8) meV with an effective mass of m$^*$ = +(0.14 ± 0.04) m$_e$. Furthermore, the onset of the majority-carrier surface resonance lies at E − E$_F$ = −(235 ± 5) meV with an effective mass of m$^*$ = +(0.36 ± 0.05) m$_e$. To unambiguously identify the dominant spin channel in STS, hexagonal quantum wells were fabricated by reactive ion etching and interpreted with the aid of a one-dimensional quantum-well model. The six edges of a hexagon appear different: atomically resolved measurements show that opposite edges differ not only in structure but also in their spectroscopic properties, characterized by an alternately present or absent spectroscopic peak. Magnetic measurements, however, yield no conclusive results regarding the origin of these observations.
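The effective masses quoted above follow from fitting a parabola E(k) = E0 + ħ²k²/(2m*) to the measured dispersion. A sketch of such a fit on synthetic data (illustrative only; not the evaluation pipeline of the thesis):

```python
HBAR = 1.054571817e-34    # reduced Planck constant [J*s]
ME = 9.1093837015e-31     # electron mass [kg]
EV = 1.602176634e-19      # electron volt [J]

def effective_mass(k_vals, e_vals_ev):
    """Least-squares fit of E(k) = E0 + c * k^2 (E in eV, k in 1/m):
    a linear regression of E against k^2 gives the curvature c,
    and m* = hbar^2 / (2c) is returned in units of the electron mass."""
    xs = [k * k for k in k_vals]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(e_vals_ev) / n
    c = sum((x - mx) * (y - my) for x, y in zip(xs, e_vals_ev)) / \
        sum((x - mx) ** 2 for x in xs)
    return HBAR ** 2 / (2 * c * EV * ME)
```

The intercept of the same regression gives the band onset E0, the other quantity reported for each state.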
The second experimental chapter deals with thin iron films deposited on a clean Rh(001) surface and studied with STM, STS, and spin-polarized (SP-)STM. A nearly defect-free Rh(001) surface is necessary to obtain iron films with few defects; this is relevant for correctly interpreting the magnetic signal and for excluding a possible influence of adsorbates. The first atomic layer of Fe orders antiferromagnetically in a c(2 × 2) structure with the easy magnetization axis perpendicular to the sample surface. The second and third layers behave ferromagnetically, with ever smaller domains for increasing coverage. From 3.5 atomic layers onward, the easy magnetization direction presumably changes from out-of-plane to in-plane, signaled by shrinking domain sizes and simultaneously broadening domain walls. Temperature-dependent spin-polarized STM allows the Curie temperature of the second layer to be estimated at 80 K. In addition, a periodic modulation of the local density of states was measured at this coverage, which appears with increasing periodicity on the third and fourth layers as well. Temperature- and bias-dependent measurements support an interpretation of the data in terms of a charge density wave. I show that the two usually competing orders (charge order and magnetic order) coexist and influence each other, which is confirmed by theoretical calculations performed in collaboration with F. P. Toldin and F. Assaad.
In the last chapter, the surface alloy CePt$_5$/Pt(111) was analyzed; according to a recent publication, this system forms a heavy-fermion lattice. Starting from the clean Pt(111) surface, the CePt$_5$/Pt(111) surface alloy was prepared. The thickness of the alloy (t in u.c.) can be varied via the amount of deposited cerium, and the resulting surface was studied with STM and STS for various thicknesses. STM images and low-energy electron diffraction (LEED) data show consistent results, which were analyzed in collaboration with C. Praetorius. For coverages below one atomic layer of cerium, no ordered structure could be observed with STM. At 2 u.c., a (2 × 2) surface reconstruction was measured, and at 3 u.c. of CePt$_5$ a (3√3 × 3√3)R30° reconstruction was observed. The transition from 3 u.c. to 5 u.c. of CePt$_5$ was investigated; with the aid of a structural model, I conclude that neither the atomic lattice nor the superlattice rotates. From a coverage of 6 u.c. of CePt$_5$ onward, a further component of the CePt$_5$ surface alloy appears that no longer exhibits a reconstruction: the atomic lattice again runs along the crystallographic directions of the Pt(111) crystal and is thus no longer rotated by 30°. Spectroscopy curves were recorded for all coverages; they give no indication of a coherent heavy-fermion system. An explanation comes from a LEED-IV study stating that every measured surface is terminated by a Pt(111) layer. STM is sensitive to the topmost layer, so the effect of a coherent heavy-fermion system would not necessarily be measurable.
This thesis comprises the synthesis, the investigation of structure-property relationships, and the property modification of complexes and coordination polymers based on the 3d transition-metal chlorides of Mn, Fe, Co, and Zn and N-heterocyclic ligands.
By combining mechanochemical reactions, microwave-assisted syntheses, and solvent-assisted, solvothermal, and solvent-free reactions into different synthesis strategies, 23 new coordination compounds were synthesized and characterized.
Starting from the mechanochemically synthesized monomeric precursor complexes [MCl2(TzH)4] (M = Mn and Fe), the more highly cross-linked coordination polymers 1∞[FeCl(TzH)2]Cl and 1∞[MCl2(TzH)] (M = Fe and Mn) were obtained as phase-pure bulk products by thermal and microwave-induced conversion reactions. The successive release of organic ligands and the associated transformation into the more highly cross-linked species were analyzed by temperature-dependent powder diffraction and simultaneous DTA/TG measurements.
By deliberate variation of the solvents in liquid-assisted grinding, i.e., mechanochemical synthesis with addition of a liquid phase, the two polymorphic coordination polymers α-1∞[MnCl2(BtzH)2] and β-1∞[MnCl2(BtzH)2] were obtained, which crystallize in the monoclinic and orthorhombic crystal systems, respectively.
Solvent-assisted reactions of MnCl2 with 1,2,4-1H-triazole (TzH) in the presence of auxiliary bases resulted, among other products, in the formation of the three-dimensional coordination polymers 3∞[MnCl(Tz)(TzH)] and 3∞{[Mn5Cl3(Tz)7(TzH)2]}2·NEt3HCl.
Structure-property correlations were investigated systematically for selected compounds with respect to their dielectric properties. The influences of intra- and intermolecular interactions on structural rigidity and the resulting polarizability were analyzed and compared. The measured dielectric constants range from high-k values for monomeric complexes to the nearly frequency-independent low-k values of the one-dimensional coordination polymers 1∞[MnCl2(TzH)] and 1∞[MnCl2(BtzH)2] and of the complexes [ZnCl2(TzH)2] and [ZnCl2(BtzH)2]·BtzH.
Properties of the synthesized compounds were modified and optimized on the one hand by producing flexible polymer films in which the one-dimensional coordination polymers 1∞[MCl2(TzH)] (M = Fe and Mn) were embedded. On the other hand, mechanochemical reactions yielded superparamagnetic composite particles consisting of a Fe3O4/SiO2 core and a crystalline [ZnCl2(TzH)2] shell, the latter synthesized in situ from the starting materials ZnCl2 and TzH.
In modern radiotherapy, an X-ray tube integrated into the linear accelerator enables 3D imaging before treatment. This so-called cone-beam CT (CBCT) permits precise verification of the patient setup and compensation of positioning inaccuracies. The benefit of improved patient positioning, however, is offset by an increased, non-negligible radiation exposure of the patient when applied daily. The dose contribution of CBCT imaging can be reduced by lowering the tube current used to generate the X-rays and by reducing the number of projections. Projections acquired in this way, however, can only be reconstructed into high-quality image data sets with sophisticated reconstruction techniques. One method that uses previously available prior images for reconstruction is the prior-image-constrained compressed-sensing algorithm (PICCS). When only a small number of projections is available, PICCS reconstructions surpass those of the conventional Feldkamp-Davis-Kress (FDK) algorithm. At present, however, PICCS cannot accommodate large variations in the prior images, and such variations lead to reduced image quality. They arise in particular from anatomical changes such as tumor shrinkage or weight changes. The goal of this work was therefore to develop a new prior-knowledge-based reconstruction algorithm that, building on PICCS, additionally allows local reliability information about the prior image to be used, so that variations in the prior images can be taken into account during reconstruction.
The basic idea of the newly developed reconstruction method is the assumption that the prior images consist of regions with small and with large variations. Building on this, a weighting matrix is generated that accounts for the strength of the prior-image variations within the reconstruction algorithm. In feasibility studies, the new method was examined with regard to the improvement in image quality under common dose-reduction strategies: reducing the number of projections, acquiring projections with lower fluence, and shrinking the acquisition range. The studies were performed on a computer phantom and, in particular, on experimental data acquired with a clinical CBCT system. For comparison, reconstructions were carried out with the standard method based on filtered back-projection, with compressed sensing, and with conventional PICCS.
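The role of the weighting matrix can be illustrated with a simplified 1-D objective in the spirit of PICCS, where a sparsifying transform (here total variation) is applied both to the element-wise weighted difference from the prior image and to the image itself. Small weights suppress the prior-image penalty where the prior is unreliable. This is a schematic of the idea under assumed inputs, not the algorithm developed in the thesis (which also includes a data-fidelity constraint on the projections):

```python
def total_variation(v):
    """Discrete 1-D total variation, used as the sparsifying transform."""
    return sum(abs(v[i + 1] - v[i]) for i in range(len(v) - 1))

def weighted_piccs_cost(image, prior, weights, alpha):
    """Schematic weighted-PICCS objective: the prior-image term penalizes
    the weighted difference from the prior (small weights where the prior
    is unreliable), blended with a plain sparsity term on the image."""
    diff = [w * (a - b) for w, a, b in zip(weights, image, prior)]
    return alpha * total_variation(diff) + (1 - alpha) * total_variation(image)
```

Down-weighting a region where the anatomy has changed removes the penalty that would otherwise pull the reconstruction toward the outdated prior there.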
In the cases investigated, the new method reconstructed image data sets of improved to excellent quality, even when only a very small number of projections, or only heavily noisy projections, were available, whereas the reconstruction results of the other algorithms exhibited strong artifacts. By integrating reliability information about the available prior images into the reconstruction algorithm, the newly developed method thus opens up the possibility of minimizing the dose contribution of daily CBCT imaging while achieving excellent image quality.
Time-resolved spectroscopy allows for analyzing light-induced energy conversion and
chromophore–chromophore interactions in molecular systems, which is a prerequisite in
the design of new materials and for improving the efficiency of opto-electronic devices.
To elucidate photo-induced dynamics of complex molecular systems, transient absorption
(TA) and coherent two-dimensional (2D) spectroscopy were employed and combined
with additional experimental techniques, theoretical approaches, and simulation models
in this work.
A systematic series of merocyanines attached to a benzene unit, synthetically varied in the number of chromophores and the substitution pattern, was investigated in cooperation with
the group of Prof. Dr. Frank Würthner at the University of Würzburg. The global analysis
of several TA experiments, and additional coherent 2D spectroscopy experiments, provided
the basis to elaborate a relaxation scheme which was applicable for all merocyanine
systems under investigation. This relaxation scheme is based on a double minimum on the
excited-state potential energy surface. One of these minima is assigned to an intramolecular
charge-transfer state which is stabilized in the bis- and tris-chromophoric dyes by
chromophore–chromophore interactions, resulting in an increase in excited-state lifetime.
Electro-optical absorption and density functional theory (DFT) calculations revealed a
preferential chromophore orientation which compensates most of the dipole moment of
the individual chromophores. Based on this structural assignment, the conformation-dependent exciton energy splitting was calculated. The linear absorption spectra of the
multi-chromophoric merocyanines could be described by a combination of monomeric and
excitonic spectra.
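The global analysis underlying the relaxation scheme can be illustrated by a generic global lifetime fit, in which a few exponential lifetimes are shared across all probe wavelengths while the amplitudes (decay-associated spectra) vary per wavelength. The following sketch on synthetic data is illustrative only; all numbers and variable names are assumptions, not values from the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic TA data: two lifetimes shared by all probe wavelengths,
# with wavelength-dependent amplitudes (decay-associated spectra, DAS).
t = np.linspace(0.1, 50.0, 200)              # pump-probe delays / ps (assumed)
wl = np.arange(500, 650, 10)                 # probe wavelengths / nm (assumed)
true_taus = np.array([1.5, 20.0])
true_das = np.stack([np.exp(-((wl - 540.0) / 30.0) ** 2),
                     0.5 * np.exp(-((wl - 600.0) / 25.0) ** 2)])
data = np.exp(-np.outer(t, 1.0 / true_taus)) @ true_das

def residuals(taus):
    # Variable projection: for fixed lifetimes the DAS follow from a
    # linear least-squares solve, so only the lifetimes are nonlinear.
    C = np.exp(-np.outer(t, 1.0 / taus))
    das, *_ = np.linalg.lstsq(C, data, rcond=None)
    return (C @ das - data).ravel()

fit = least_squares(residuals, x0=[0.5, 10.0], bounds=(0.01, 100.0))
recovered = np.sort(fit.x)                   # close to the true lifetimes
```

In real global analysis the kinetic model is often a sequential or target scheme rather than parallel decays, but the shared-lifetime principle is the same.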
Subsequently, a structurally complex polymeric squaraine dye was studied in collaboration
with the research groups of Prof. Dr. Christoph Lambert and Prof. Dr. Roland Mitric
at the University of Würzburg. This polymer consists of a superposition of zigzag and
helix structures depending on the solvent. High-level DFT calculations confirmed the previous
assignment that zigzag and helix structures can be treated as J- and H-aggregates,
respectively. TA experiments revealed that, depending on the solvent as well as the
excitation energy, ultrafast energy transfer within the squaraine polymer proceeds from
initially excited helix segments to zigzag segments or vice versa. Additionally, 2D spectroscopy
confirmed the observed sub-picosecond dynamics. In contrast to other conjugated
polymers such as MEH-PPV, which is investigated in the last chapter, ultrafast
energy transfer in squaraine polymers is based on the matching of the density of states
between donor and acceptor segments due to the small reorganization energy in cyanine-like
chromophores.
Finally, the photo-induced dynamics of the aggregated phase of the conjugated polymer
MEH-PPV was investigated in cooperation with the group of Prof. Dr. Anna Köhler at the University of Bayreuth. Our collaborators had previously described the aggregation of MEH-PPV upon cooling by the formation of so-called HJ-aggregates based on exciton
theory. TA measurements, combined with an accompanying band analysis, allowed distinct
relaxation processes within the excited state and to the ground state to be discriminated. By
employing 2D spectroscopy the energy transfer between different conjugated segments
within the aggregated polymer was resolved. The initial exciton relaxation within the
aggregated phase indicates a low exciton mobility, in contrast to the subsequent energy
transfer between different chromophores within several picoseconds.
With its systematic study of structure-dependent relaxation dynamics, this work contributes
to the basic understanding of the structure–function relationship within complex
molecular systems. The investigated molecular classes display a high potential to increase
efficiencies of opto-electronic devices, e.g., organic solar cells, by the selective choice of
the molecular morphology.
Within the framework of this thesis, photolysis reactions in the liquid phase were investigated by means of ultrafast optical spectroscopy. Apart from molecular studies dealing with the highly spin-dependent reactivity of diphenylcarbene (DPC) in binary solvent
mixtures and ligand dissociation reactions of so-called CO-releasing molecules (CORMs),
special emphasis was put on the implementation and characterization of methods improving
and extending the signal detection in conventional pump–probe transient absorption setups.
The assumption of DPC being an archetypal triplet-ground-state arylcarbene was recently questioned by matrix-isolation studies at low temperatures. DPC embedded in argon matrices revealed a hitherto unknown reactivity when the carbene environment was modified by small amounts of methanol dopant molecules. To complement these findings with liquid-phase experiments at room temperature, femtosecond pump–probe transient absorption spectroscopy with probing in the visible and ultraviolet regime was employed to unravel primary reaction processes of DPC in solvent mixtures. Supported by quantum chemical simulations conducted by our collaborators, it was shown that a competition between reaction pathways occurs that depends not only on the nearby solvent molecule but also on its interaction with other solvent molecules. In-depth analysis of the solvation dynamics and the amount of nascent intermediates corroborates the importance of a hydrogen-bonded complex with a protic solvent molecule, in striking analogy to complexes found at cryogenic temperatures.
Probing the transient absorption of molecules in the mid-infrared spectral range benefits from the high chemical specificity of molecules’ vibrational signatures. The technique of chirped-pulse upconversion (CPU) constitutes a promising alternative to standard direct multichannel MCT detection when accessing this spectral detection window. Hence, one chapter of this thesis is dedicated to a direct comparison between both detection methods. By conducting an exemplary pump–probe transient absorption experiment, it became evident that the additional nonlinear interaction step is responsible for increased noise levels when using CPU. However, a correction procedure capable of removing these additional noise contributions, stemming from the fundamental laser radiation used for upconversion, was successfully tested. Perhaps most importantly for various spectroscopic applications, CPU offered a significantly extended detection bandwidth owing to the high pixel numbers of modern CCD cameras.
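The noise-correction step can be illustrated generically: if a copy of the fluctuating fundamental is recorded alongside the signal, the correlated noise component can be regressed out shot by shot. This sketch is an illustrative assumption about how such a correction can work, not the exact procedure of the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_shots = 5000
true_signal = 1.0e-3                              # constant pump-induced signal (assumed)
ref = rng.normal(1.0, 0.05, n_shots)              # recorded fundamental-laser fluctuations
noise_floor = rng.normal(0.0, 1.0e-4, n_shots)    # uncorrelated detection noise
meas = true_signal + 0.8 * (ref - ref.mean()) + noise_floor

# Remove the component of the measured signal linearly correlated
# with the reference channel (simple least-squares regression).
slope = np.cov(meas, ref)[0, 1] / np.var(ref, ddof=1)
corrected = meas - slope * (ref - ref.mean())
```

After the correction, the shot-to-shot scatter is set by the uncorrelated noise floor rather than by the fundamental's fluctuations.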
Transition-metal complexes capable of releasing small molecular messengers upon photoactivation are promising sources of gasotransmitters such as carbon monoxide (CO) or nitric oxide (NO) in biological applications. However, only little is known about the characteristic time scales of ligand dissociation in this class of molecules. For this purpose, two complexes were investigated with femtosecond time resolution: [Mn(CO)3(tpm)]Cl with tpm=tris(2-pyrazolyl)methane, a manganese tricarbonyl complex which has proven to be selective and cytotoxic to cancer cells, and [Mo(CO)2(NO)(iPr3tacn)]PF6 with iPr3tacn=1,4,7-triisopropyl-1,4,7-triazacyclononane, a molybdenum complex containing both carbonyl and nitrosyl ligands. By conducting pump–probe transient absorption measurements in different spectral probing windows supported by quantum chemical calculations and linear absorption spectroscopy, it was shown that both complexes are able to release one CO ligand within the first few picoseconds after UV excitation. The results complement existing studies which focused on the molecules’ ligand-releasing properties upon long-term exposure. The additional information gained on an ultrafast time scale provides a comprehensive understanding of individual reaction steps connected with ligand release in this class of molecules. Hence, the studies might create new incentives to develop modified molecules for specific applications.
This work brings forward successful implementations of ultrafast chirality-sensitive spectroscopic techniques by probing circular dichroism (CD) or optical rotation dispersion (ORD). Furthermore, also first steps towards chiral quantum control, i.e., the selective variation of the chiral properties of molecules with the help of coherent light, are presented.
In the case of CD probing, a setup capable of mirroring an arbitrary polarization state of an ultrashort laser pulse was developed. Hence, by passing a left-circularly polarized laser pulse through this setup, a right-circularly polarized laser pulse is generated. These two pulse enantiomers can be utilized as probe pulses in a pump–probe CD experiment. Besides CD spectroscopy, the setup can also be utilized for anisotropy or ellipsometry spectroscopy. Within this thesis, the approach is used to elucidate the photochemistry of hemoglobin, the oxygen-transporting protein in mammalian blood. The oxygen loss can be triggered with laser pulses as well, and the results of the time-resolved CD experiment suggest a cascade-like relaxation, probably through different spin states, of the metallo-porphyrins in hemoglobin.
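The handedness flip performed by the polarization-mirroring setup can be sketched in Jones calculus. Here an idealized mirroring operation (a sign flip of one transverse field component) is assumed; handedness sign conventions differ between textbooks, so the signs are illustrative.

```python
import numpy as np

# Jones vectors in the (horizontal, vertical) field basis.
lcp = np.array([1.0, 1.0j]) / np.sqrt(2.0)     # left-circular polarization
rcp = np.array([1.0, -1.0j]) / np.sqrt(2.0)    # right-circular polarization

# Idealized mirroring operation: flip the sign of one transverse component,
# which reverses the handedness of a circularly polarized pulse.
mirror = np.array([[1.0, 0.0], [0.0, -1.0]])

out = mirror @ lcp
print(np.allclose(out, rcp))   # True: the "pulse enantiomer" is generated
```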
The ORD probing was realized via the combination of common-path optical heterodyne interferometric polarimetry and accumulative femtosecond spectroscopy. Within this setup, on the one hand the applicability of this approach for ultrafast studies was demonstrated explicitly. On the other hand, the discrimination between an achiral and a racemic solution without prior spatial separation was realized. This was achieved by inducing an enantiomeric excess via polarized femtosecond laser pulses and following its evolution with the developed polarimeter. Hence, chiral selectivity was already achieved with this method which can be turned into chiral control if the polarized laser pulses are optimized to steer an enhancement of the enantiomeric excess.
Furthermore, within this thesis, theoretical prerequisites for anisotropy-free pump–probe experiments with arbitrarily polarized laser pulses were derived. Due to the small magnitude of optical chirality-sensitive signals, these results are important for any pump–probe chiral spectroscopy, like the CD probing presented in this thesis. Moreover, since chiral quantum control requires a variation of the molecular structure, knowledge about rearrangement reactions triggered by photons is needed. Hence, within this thesis the ultrafast Wolff rearrangement of an α-diazocarbonyl was investigated via ultrafast photofragment ion spectroscopy in the gas phase. Though the compound is not chiral, knowledge about the exact reaction mechanism is beneficial for future studies of chiral compounds.
The ecosystem of the high northern latitudes is affected by recently changing environmental conditions. The Arctic has undergone a significant climatic change over the last decades. The land coverage is changing and a phenological response to the warming is apparent. Remotely sensed data can assist the monitoring and quantification of these changes. Remote sensing of the Arctic has predominantly been carried out using optical sensors, but these encounter problems in the Arctic environment, e.g. the frequent cloud cover or the solar geometry. In contrast, the imaging of Synthetic Aperture Radar is not affected by cloud cover, and the acquisition of radar imagery is independent of solar illumination. The objective of this work was to explore how polarimetric Synthetic Aperture Radar (PolSAR) data of TerraSAR-X, TanDEM-X, Radarsat-2 and ALOS PALSAR and interferometrically derived digital elevation model data of the TanDEM-X Mission can contribute to collecting meaningful information on the current state of the Arctic environment. The study was conducted for Canadian sites of the Mackenzie Delta Region and Banks Island, and in situ reference data were available for the assessment. The state-of-the-art analysis of the PolSAR data required the application of Non-Local Means filtering and of the decomposition of co-polarized data.
The Non-Local Means filter showed a high capability to preserve the image values, to keep the edges and to reduce the speckle. This supported not only the suitability for interpretation but also for classification. The classification accuracies of Non-Local Means filtered data were on average +10% higher compared to unfiltered images. The correlation of the co- and quad-polarized decomposition features was high for classes with distinct surface or double-bounce scattering, and a usage of the co-polarized data is beneficial for regions of natural land coverage and for low vegetation formations with little volume scattering. The evaluation further revealed that the X- and C-band were most sensitive to the generalized land cover classes. It was found that the X-band data were sensitive to low vegetation formations with low shrub density, while the C-band data were sensitive to the shrub density and the shrub-dominated tundra. In contrast, the L-band data were less sensitive to the land cover. Among the different dual-polarized data, the HH/VV-polarized data were identified to be most meaningful for the characterization and classification, followed by the HH/HV-polarized and the VV/VH-polarized data. The quad-polarized data showed the highest sensitivity to the land cover, but differences to the co-polarized data were small. The accuracy assessment showed that spectral information was required for accurate land cover classification. The best results were obtained when spectral and radar information was combined. The benefit of including radar data in the classification was up to +15% accuracy and most significant for the classes wetland and sparsely vegetated tundra. The best classifications were realized with quad-polarized C-band and multispectral data and with co-polarized X-band and multispectral data. The overall accuracy was up to 80% for unsupervised and up to 90% for supervised classifications.
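The Non-Local Means idea can be sketched minimally: each pixel becomes a patch-similarity-weighted average over a search window, which averages speckle down while preserving edges. The parameters and the scalar-image simplification below are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

def nlm_filter(img, patch=1, search=5, h=0.15):
    """Minimal Non-Local Means: each pixel is replaced by a weighted average
    of pixels in a search window, weighted by the similarity of the small
    patches surrounding them."""
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            p0 = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            weights, values = [], []
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    qi, qj = ci + di, cj + dj
                    p1 = padded[qi - patch:qi + patch + 1,
                                qj - patch:qj + patch + 1]
                    d2 = np.mean((p0 - p1) ** 2)       # patch dissimilarity
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[qi, qj])
            out[i, j] = np.average(values, weights=weights)
    return out

# Speckle-like noise on a flat region is averaged down
rng = np.random.default_rng(0)
noisy = 1.0 + rng.normal(0.0, 0.1, (16, 16))
denoised = nlm_filter(noisy)
```

Production implementations vectorize this loop and adapt h to the estimated noise level; for SAR data, multiplicative-speckle variants of the similarity measure are common.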
The results indicated that the shortwave co-polarized data show promise for the classification of tundra land cover since the polarimetric information is sensitive to low vegetation and the wetlands. Furthermore, co-polarized data provide a higher spatial resolution than the quad-polarized data.
The analysis of the intermediate digital elevation model data of the TanDEM-X showed a high potential for the characterization of the surface morphology. The basic and relative topographic features were shown to be of high relevance for the quantification of the surface morphology and an area-wide application is feasible. In addition, these data were of value for the classification and delineation of landforms. Such classifications will assist the delineation of geomorphological units and have potential to identify locations of actual and future morphologic activity.
Numerical Simulations of Heavy Fermion Systems: From He-3 Bilayers to Topological Kondo Insulators
(2014)
Even though heavy fermion systems have been studied for a long time, a strong interest in heavy fermions persists to this day. While the basic principles of local moment formation, the Kondo effect and the formation of composite quasiparticles leading to a Fermi liquid are understood, there remain many interesting open questions. A number of issues arise due to the interplay of heavy fermion physics with other phenomena like magnetism and superconductivity.
In this regard, experimental and theoretical investigations of He-3 can provide valuable insights. He-3 represents a unique realization of a quantum liquid. The fermionic nature of He-3 atoms, in conjunction with the absence of long-range Coulomb repulsion, makes this material an ideal model system to study Fermi liquid behavior.
Bulk He-3 has been investigated for quite some time. More recently, it became possible to prepare and study layered He-3 systems, in particular single layers and bilayers. The possibility of tuning various physical properties of the system by changing the density of He-3 and using different substrate materials makes layers of He-3 an ideal quantum simulator for investigating two-dimensional Fermi liquid phenomenology.
In particular, bilayers of He-3 have recently been found to exhibit heavy fermion behavior. As a function of temperature, a crossover from an incoherent state with decoupled layers to a coherent Fermi liquid of composite quasiparticles was observed. This behavior has its roots in the hybridization of the two layers. The first layer is almost completely filled and subject to strong correlation effects, while the second layer is only partially filled and weakly correlated. The quasiparticles are formed due to the Kondo screening of localized moments in the first layer by the second-layer delocalized fermions, which takes place at a characteristic temperature scale, the coherence scale Tcoh.
Tcoh can be tuned by changing the He-3 density. In particular, at a certain critical filling,
the coherence scale is expected to vanish, corresponding to a divergence of the quasiparticle effective mass, and a breakdown of the Kondo effect at a quantum critical point. Beyond the critical point, the layers are decoupled. The first layer is a local moment magnet, while the second layer is an itinerant overlayer.
However, already at a filling smaller than the critical value, preempting the critical point, the onset of a finite sample magnetization was observed. The character of this intervening phase remained unclear.
Motivated by these experimental observations, in this thesis the results of model calculations based on an extended Periodic Anderson Model are presented. The three-particle ring exchange, which is the dominant magnetic exchange process in layered He-3, is included in the model. It leads to an effective ferromagnetic interaction between spins on neighboring sites. In addition, the model incorporates the constraint of no double occupancy by taking the limit of large local Coulomb repulsion.
By means of Cellular DMFT, the model is investigated for a range of values of the chemical potential µ and inverse temperature β = 1/T. The method is a cluster extension of the Dynamical Mean-Field Theory (DMFT) and allows one to systematically include non-local correlations beyond the DMFT. The auxiliary cluster model is solved by a hybridization expansion CTQMC cluster solver, which provides unbiased, numerically exact results for the Green’s function and other observables of interest.
As a first step, the onset of Fermi liquid coherence is studied. At low enough temperature, the self-energy is found to exhibit a linear dependence on Matsubara frequency. Meanwhile, the spin susceptibility crosses over from a Curie-Weiss law to a Pauli law. Both observations serve as fingerprints of the Fermi liquid state.
The heavy fermion state appears at a characteristic coherence scale Tcoh. This scale depends strongly on the density. While it is rather high for small filling, for larger filling Tcoh is increasingly suppressed. This involves a decreasing quasiparticle residue Z ∼ Tcoh and an enhanced mass renormalization m∗/m ∼ 1/Tcoh. Extrapolation leads to a critical filling, where the coherence scale is expected to vanish at a quantum critical point. At the same time, the effective mass diverges. This corresponds to a breakdown of the Kondo effect, which is responsible for the formation of quasiparticles, due to a vanishing of the effective hybridization between the layers.
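The connection between the self-energy and the quasiparticle residue can be made concrete: in a Fermi liquid, Im Σ(iωn) is linear at small Matsubara frequencies, and its slope yields Z = m/m*. The sketch below uses a mock self-energy; β, the frequencies and Z_true are illustrative assumptions, not values from the thesis.

```python
import numpy as np

beta = 50.0                                        # inverse temperature (assumed)
omega_n = (2 * np.arange(6) + 1) * np.pi / beta    # fermionic Matsubara frequencies

# Mock Fermi-liquid self-energy: Im Sigma(i w_n) is linear in w_n at low
# frequency with slope 1 - 1/Z, plus a small higher-order correction.
Z_true = 0.25
im_sigma = (1.0 - 1.0 / Z_true) * omega_n - 0.02 * omega_n ** 3

# A linear fit through the lowest Matsubara points yields the slope, hence Z
slope = np.polyfit(omega_n[:3], im_sigma[:3], 1)[0]
Z_est = 1.0 / (1.0 - slope)                        # close to Z_true
```

In CTQMC data the same extraction is done on the noisy measured self-energy, so the number of fitted frequencies becomes a judgment call.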
Taking only single-site DMFT results into account, the above scenario seems plausible. However, paramagnetic DMFT neglects the ring exchange interaction completely. In order to improve on this, Cellular DMFT simulations are conducted for small clusters of size Nc = 2 and 3. The results paint a different physical picture. The ring exchange, by favoring a ferromagnetic alignment of spins, competes with the Kondo screening. As a result, strong short-range ferromagnetic fluctuations appear at larger values of µ. By lowering the temperature, these fluctuations are enhanced at first but are increasingly suppressed for T < Tcoh, which is consistent with Fermi liquid coherence. However, beyond a certain threshold value of µ, fluctuations persist to the lowest temperatures. At the same time, while not apparent in the DMFT results, the total occupation n increases quite strongly in a very narrow range around the same value of µ. The evolution of n with µ is always continuous, but hints at a discontinuity in the limit Nc → ∞. This first-order transition breaks the Kondo effect. Beyond the transition, a ferromagnetic state in the first layer is established, and the second layer becomes a decoupled overlayer.
These observations provide a quite appealing interpretation of the experimental results. As a function of chemical potential, the Kondo breakdown quantum critical point is preempted by a first-order transition, where the layers decouple and the first layer turns into a ferromagnet. In the experimental situation, where the filling can be tuned directly, the discontinuous transition is mirrored by a phase separation, which interpolates between the Fermi liquid ground state at lower filling and the magnetic state at higher filling. This is precisely the range of the intervening phase found in the experiments, which is characterized by an onset of a finite sample magnetization.
Besides the interplay of heavy fermion physics and magnetic exchange, recently the spin-orbit coupling, which is present in many heavy fermion materials, attracted a lot of interest. In the presence of time-reversal symmetry, due to spin-orbit coupling, there is the possibility of a topological ground state.
It was recently conjectured that the energy scale of spin-orbit coupling can become dominant in heavy fermion materials, since the coherence scale and quasiparticle bandwidth are rather small. This can lead to a heavy fermion ground state with a nontrivial band topology; that is, a topological Kondo insulator (TKI). While being subject to strong correlation effects, this state must be adiabatically connected to a non-interacting, topological state.
The idea of the topological ground state realized in prototypical Kondo insulators, in particular SmB6, promises to shed light on some of the peculiarities of these materials, like a residual conductivity at the lowest temperatures, which have remained unresolved so far.
In this work, a simple two-band model for two-dimensional topological Kondo insulators is devised, which is based on a single Kramers doublet coupled to a single conduction band. The model is investigated in the presence of a Hubbard interaction as a function of interaction strength U and inverse temperature β. The bulk properties of the model are obtained by DMFT, with a hybridization expansion CTQMC impurity solver. The DMFT approximation of a local self-energy leads to a very simple way of computing the topological invariant.
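The simplification afforded by a local self-energy can be illustrated with the Fu-Kane parity criterion applied to an inversion-symmetric two-band toy model on the square lattice (an illustrative stand-in, not the actual Kramers-doublet model of the thesis): a static self-energy shift merely renormalizes the mass term, so the invariant follows from the sign of the mass at the four time-reversal-invariant momenta (TRIM).

```python
import numpy as np

def parity_product(m, sigma0=0.0):
    """Product of the occupied band's parity eigenvalues at the four TRIM
    points of the square lattice for a minimal two-band model with mass term
    d3(k) = m + sigma0 + cos(kx) + cos(ky); the static local self-energy
    shift sigma0 simply renormalizes the mass. A value of -1 signals a
    nontrivial band topology (Fu-Kane criterion)."""
    prod = 1.0
    for kx, ky in [(0.0, 0.0), (np.pi, 0.0), (0.0, np.pi), (np.pi, np.pi)]:
        d3 = m + sigma0 + np.cos(kx) + np.cos(ky)
        prod *= np.sign(d3)
    return prod

# Band inversion at the Gamma point: nontrivial
print(parity_product(-1.0))              # -1.0
# Trivial starting point, driven topological by an interaction-induced shift
print(parity_product(-3.0))              # 1.0
print(parity_product(-3.0, sigma0=2.0))  # -1.0
```

Which TRIM point hosts the band inversion distinguishes phases of the Γ- and M-type mentioned above; in the full DMFT calculation Σ(0) is obtained self-consistently rather than put in by hand.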
The results show that with increasing U the system can be driven through a topological phase transition. Interestingly, the transition is between distinct topological insulating states, namely the Γ-phase and the M-phase. This appearance of different topological phases is possible due to the symmetry of the underlying square lattice. By adiabatically connecting both interacting states with the respective non-interacting state, it is shown that the transition indeed drives the system from the Γ-phase to the M-phase.
A different behavior can be observed by pushing the bare position of the Kramers doublet to higher binding energies. In this case, the non-interacting starting point has a trivial band topology. By switching on the interaction, the system can be tuned through a quantum phase transition, with a closing of the band gap. Upon reopening of the band gap, the system is in the Γ-phase, i.e. a topological insulator. By increasing the interaction strength further, the system moves into a strongly correlated regime. In fact, close to the expected transition to the M-phase, the mass renormalization becomes quite substantial. While absent in the paramagnetic DMFT simulations conducted, it is conceivable that instead of a topological phase transition, the system undergoes a time-reversal symmetry breaking, magnetic transition.
The regime of strong correlations is studied in more detail as a function of temperature, both in the bulk and with open boundary conditions. A quantity which proved very useful is the bulk topological invariant Ns, which can be generalized to finite interaction strength and temperature. In particular, it can be used to define a temperature scale T* for the onset of the topological state. Rescaling the results for Ns, a nice data collapse of the results for different values of U, from the local moment regime to strongly mixed valence, is obtained. This hints at T* being a universal low-energy scale in topological Kondo insulators. Indeed, by comparing T* with the coherence scale extracted from the self-energy mass renormalization, it is found that both scales are equivalent up to a constant prefactor. Hence, the scale T*, obtained from the temperature dependence of topological properties, can be used as an independent measure for Fermi liquid coherence. This is particularly useful in the experimentally relevant mixed valence regime, where charge fluctuations cannot be neglected. Here, a separation of the energy scales related to spin and charge fluctuations is not possible.
The importance of charge fluctuations becomes evident in the extent of spectral weight transfer as the temperature is lowered. For mixed valence, while the hybridization gap emerges, a substantial amount of spectral weight is shifted from the vicinity of the Fermi level to the lower Hubbard band. In contrast, this effect is strongly suppressed in the local moment regime.
In addition to the bulk properties, the spectral function for open boundaries is studied as a function of temperature, both in the local moment and mixed valence regime. This allows an investigation of the emergence of topological edge states with temperature. The method used here is the site-dependent DMFT, which is a generalization of the conventional DMFT to inhomogeneous systems. The hybridization expansion CTQMC algorithm is used as impurity solver.
By comparison with the bulk results for the topological quantity Ns, it is found that the temperature scale for the appearance of the topological edge states is T*, both in the mixed valence and local moment regime.