Graduate School of Science and Technology
Almost every week, news broadcasts report earthquakes, hurricanes, tsunamis, or forest fires. Watching such news is hard; entering the affected areas as a rescuer is harder still. Rescue teams need to gain a quick overview of the devastated area and find victims. Time is critical, since the chance of survival shrinks the longer it takes for help to arrive. To coordinate the teams efficiently, all information must be collected at the command center. Teams therefore search destroyed houses and hollow spaces for victims, never certain that the building will not collapse completely while they are inside. Here, rescue robots are welcome helpers: they are replaceable and make the work safer. Unfortunately, rescue robots are not yet usable off-the-shelf.
There is no doubt that such a robot has to fulfil essential requirements to successfully accomplish a rescue mission. Apart from the mechanical requirements, it must be able to build a 3D map of the environment. This is essential for navigating rough terrain and performing manipulation tasks (e.g. opening doors). To build a map and gather environmental information, robots are equipped with multiple sensors. Since laser scanners produce precise measurements over a wide scanning range, they are the most common visual sensors used for mapping.
Unfortunately, they produce erroneous measurements when scanning transparent objects (e.g. glass, transparent plastic) or specular reflective objects (e.g. mirrors, shiny metal). Such objects can be anywhere, and manipulating the environment beforehand to prevent their influence is impossible. Using additional sensors also bears risks.
The problem is that these objects are only occasionally visible, depending on the incident angle of the laser beam, the surface, and the type of object. For transparent objects, measurements may originate from the object surface or from objects behind it. For specular reflective objects, measurements may originate from the object surface or from a mirrored object; the mirrored object appears behind the surface, which is wrong. To obtain a precise map, the surfaces must be recognised and mapped reliably; otherwise the robot navigates into them and crashes. Furthermore, points behind a surface should be identified and treated according to the object type: points behind a transparent surface should remain, as they represent real objects, whereas points behind a specular reflective surface should be erased. To do so, the object type must be classified. Unfortunately, none of the current approaches fulfils these requirements.
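The treatment rule described above can be sketched as a toy filter (illustrative only, with assumed labels and range values; not the thesis' ROS implementation):

```python
# Illustrative toy filter (assumed labels and ranges; not the thesis'
# ROS implementation): given the range of a detected surface along a
# laser beam and the surface's classification, decide which readings
# behind the surface to keep.

def filter_behind_surface(ranges, surface_range, surface_type):
    """Keep or erase range readings lying behind a detected surface.

    surface_type: 'transparent' or 'specular' (hypothetical labels).
    """
    kept = []
    for r in ranges:
        if r <= surface_range:
            kept.append(r)           # in front of or on the surface
        elif surface_type == 'transparent':
            kept.append(r)           # real object seen through glass
        # 'specular': phantom mirror image behind the surface -> drop
    return kept

beam = [1.0, 2.5, 4.0]               # hypothetical readings [m]
print(filter_behind_surface(beam, 2.0, 'transparent'))  # [1.0, 2.5, 4.0]
print(filter_behind_surface(beam, 2.0, 'specular'))     # [1.0]
```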
Therefore, this thesis addresses the problem of detecting transparent and specular reflective objects and identifying their influence. To give the reader a starting point, the first chapters describe the theoretical background of light propagation; the sensor systems used for range measurements; the mapping approaches used in this work; and the state of the art in detecting and identifying transparent and specular reflective objects. Afterwards, the Reflection-Identification-Approach, the core of this thesis, is presented. It comprises a 2D and a 3D implementation for detecting and classifying such objects; both are available as ROS nodes. In the next chapter, various experiments demonstrate the applicability and reliability of these nodes, proving that transparent and specular reflective objects can be detected and classified. In 2D, a Pre- and a Post-Filter module are required; in 3D, classification is possible with the Pre-Filter alone, owing to the larger number of measurements. An example shows that an updatable mapping module allows the robot navigation to rely on refined maps; otherwise, two individual maps are built and must be fused afterwards. Finally, the last chapter summarises the results and proposes suggestions for future work.
This work is concerned with the numerical approximation of solutions to models that are used to describe atmospheric or oceanographic flows. In particular, it concentrates on the approximation of the Shallow Water equations with bottom topography and the compressible Euler equations with a gravitational potential. Numerous methods have been developed to approximate solutions of these models. Of specific interest here are the approximation of near-equilibrium solutions and, in the case of the Euler equations, the low Mach number flow regime. It is inherent in most numerical methods that the quality of the approximation increases with the number of degrees of freedom. Therefore, these schemes are often run in parallel on large computers to achieve the best possible approximation. However, even on those machines, the desired accuracy cannot always be achieved within the maximal number of degrees of freedom they allow. The main focus of this work therefore lies in the development of numerical schemes that give better resolution of the resulting dynamics for the same number of degrees of freedom, compared with classical schemes.
This work is the result of a cooperation between Prof. Klingenberg of the Institute of Mathematics in Würzburg and Prof. Röpke of the Astrophysical Institute in Würzburg. The aim of this collaboration is the development of methods to compute stellar atmospheres. Two main challenges are tackled in this work. The first is the accurate treatment of source terms in the numerical scheme. This leads to so-called well-balanced schemes, which allow for an accurate approximation of near-equilibrium dynamics. The second challenge is the approximation of flows in the low Mach number regime. It is known that the compressible Euler equations tend towards the incompressible Euler equations as the Mach number tends to zero. Classical schemes often show excessive diffusion in that flow regime. The scheme developed here falls into the category of asymptotic preserving schemes, i.e. the numerical scheme reflects the behavior of the continuous equations. Moreover, it is shown that the diffusion of the numerical scheme is independent of the Mach number.
In chapter 3, an HLL-type approximate Riemann solver is adapted for simulations of the Shallow Water equations with bottom topography to develop a well-balanced scheme. In the literature, most schemes only tackle the equilibria in which the fluid is at rest, the so-called lake-at-rest solutions. Here, a scheme is developed that accurately captures all equilibria of the Shallow Water equations. Moreover, in contrast to other works, a second-order extension is proposed that does not rely on an iterative scheme inside the reconstruction procedure, leading to a more efficient scheme.
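For orientation, the governing equations and the equilibria involved can be written in standard textbook notation (a sketch, not the thesis' exact formulation):

```latex
% 1D Shallow Water equations with bottom topography b(x):
\begin{align}
  \partial_t h + \partial_x (hu) &= 0, \\
  \partial_t (hu) + \partial_x \left( h u^2 + \tfrac{g}{2} h^2 \right)
    &= -g h \,\partial_x b .
\end{align}
% Lake-at-rest equilibrium (the case most schemes handle):
%   u = 0, \qquad h + b = \text{const}.
% General moving equilibria (the target of the fully well-balanced scheme):
\begin{equation}
  hu = \text{const}, \qquad \tfrac{1}{2} u^2 + g\,(h + b) = \text{const}.
\end{equation}
```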
In chapter 4, a Suliciu relaxation scheme is adapted for the resolution of hydrostatic equilibria of the Euler equations with a gravitational potential. The hydrostatic relations are underdetermined, and therefore the solutions to these equations are not unique. However, the scheme is shown to be well-balanced for a wide class of hydrostatic equilibria. For specific classes, quadrature rules are computed that ensure the exact well-balanced property. Moreover, the scheme is shown to be robust, i.e. it preserves the positivity of mass and energy, and stable with respect to the entropy. Numerical results are presented to investigate the impact of the different quadrature rules on the well-balanced property.
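The underdetermination mentioned above can be made explicit in standard notation (a sketch for orientation, not the thesis' formulation):

```latex
% Hydrostatic equilibrium of the Euler equations with potential \phi:
\begin{equation}
  u = 0, \qquad \nabla p = -\rho \,\nabla \phi .
\end{equation}
% This is one relation for the two unknowns \rho and p: the system is
% underdetermined and must be closed by an additional assumption,
% e.g. an isothermal or isentropic stratification.
```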
In chapter 5, a Suliciu relaxation scheme is adapted for the simulation of low Mach number flows. The scheme is shown to be asymptotic preserving and not to suffer from excessive diffusion in the low Mach number regime. Moreover, it is shown to be robust under certain parameter combinations and to be stable according to a Chapman-Enskog analysis.
Numerical results are presented in order to show the advantages of the new approach.
In chapter 6, the schemes developed in chapters 4 and 5 are combined in order to investigate the performance of the numerical scheme in the low Mach number regime in a gravitationally stratified atmosphere. The scheme is shown to be well-balanced, robust, and stable with respect to a Chapman-Enskog analysis. Numerical tests are presented to show the advantage of the newly proposed method over the classical scheme.
In chapter 7, some remarks on an alternative way to tackle multidimensional simulations are presented. However, no numerical simulations are performed, and it is shown why further research on the suggested approach is necessary.
In order to shrink semiconductor devices and improve their efficiency at the same time, silicon-based semiconductor devices have been engineered until the material has almost reached its performance limits. As the candidate material for next-generation semiconductor devices, single-wall carbon nanotubes (SWNTs) show great potential due to their promise of increased device efficiency and their high charge-carrier mobilities in nanometre-sized active areas. However, material-based problems must be overcome before SWNTs can be employed in semiconductor devices. SWNTs tend to aggregate into bundles; obtaining an electronically or chirally homogeneous SWNT dispersion is not trivial, and once that is achieved, a homogeneous thin film must be produced with a technique that is practical, easy, and scalable. This work aimed to solve both of these problems.
In the first part of this study, six different polymers, containing fluorene or carbazole as the rigid part and bipyridine, bithiophene, or biphenyl as the accompanying copolymer unit, were used to selectively disperse semiconducting SWNTs. From absorption and photoluminescence spectroscopy of the corresponding dispersions, it was found that the rigid part of the copolymer plays the primary role in determining its dispersion efficiency and electronic sorting ability. Of the two tested units, carbazole has the higher π-electron density; due to increased π−π interactions, carbazole-containing copolymers show higher dispersion efficiency. However, the electronic sorting ability of fluorene-containing polymers is superior. The chiral selectivity of the polymers in the dispersion is not directly foreseeable from the choice of backbone units. In the end, obtaining a monochiral dispersion was found to depend strongly on the raw material used in combination with the chosen polymer.
Next, one of the best-performing polymers, chosen for its high chirality enrichment and electronic sorting ability, was used to disperse SWNTs. Thin films with thicknesses varying between 18 ± 5 and 755 ± 5 nm were prepared using the vacuum-filtration wet-transfer method in order to analyze them optically and electronically.
The scalability and efficiency of the integrated thin-film production method were demonstrated using optical, topographical, and electronic measurements. The relative photoluminescence quantum yield of the radiative decay from the SWNT thin films was found to be constant across the thickness range. The constant roughness of the film surface and the linearly increasing concentration of SWNTs also support the scalability of this thin-film production method. Electronic measurements on bottom-gate top-contact transistors showed an increasing charge-carrier mobility in the linear and saturation regimes. This was caused by the missing normalization of the mobility for the thickness of the active layer, which emphasizes the importance of considering this dimension when comparing mobilities of different field-effect transistors.
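The extraction pitfall can be illustrated with the standard linear-regime mobility formula (the numbers below are hypothetical; the device geometry, gate capacitance, and transconductances are assumptions, not the thesis data):

```python
# Illustrative arithmetic (hypothetical numbers): the standard
# linear-regime mobility extraction
#   mu = (L / (W * C_i * V_DS)) * g_m
# takes no account of the active-layer thickness d. If the drain
# current (and hence the transconductance g_m) grows with d, the
# extracted mu appears to grow too.

L_ch, W = 20e-6, 1e-3        # channel length/width [m] (assumed)
C_i = 1.2e-4                 # gate capacitance per area [F/m^2] (assumed)
V_DS = 1.0                   # drain-source voltage [V]

def apparent_mobility(g_m):
    return L_ch / (W * C_i * V_DS) * g_m

# Hypothetical transconductances for an 18 nm and a 180 nm film,
# assuming g_m scales linearly with thickness:
g_thin, g_thick = 1e-6, 1e-5
mu_thin = apparent_mobility(g_thin)
mu_thick = apparent_mobility(g_thick)
print(mu_thick / mu_thin)                  # 10x apparent increase
print((mu_thick / 180) / (mu_thin / 18))   # ~1 after thickness normalization
```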
This thesis will outline studies performed on the fluorescence dynamics of phenyl-benzo-
[c]-tetrazolo-cinnolium chloride (PTC) in alcoholic solutions with varying viscosity using
time-resolved fluoro-spectroscopic methods. Furthermore, the properties of femtosecond
Laguerre-Gaussian (LG) laser pulses will be investigated with respect to their temporal
and spatial features and an approach will be developed to measure and control the spatial
intensity distribution on the time scale of the pulse.
Tetrazolium salts are widely used in biological assays for their low oxidation and reduction
thresholds and spectroscopic properties. However, a neglected feature in these applications
is the advantage that detection of emitted light has over the determination of the
absorbance. To corroborate this, PTC as one of the few known fluorescent tetrazolium
salts was investigated with regard to its luminescent features. Steady-state spectroscopy
revealed how PTC can be formed by a photoreaction from 2,3,5-triphenyl-tetrazolium
chloride (TTC) and how the fluorescence quantum yield behaved in alcoholic solvents
with different viscosity. In the same set of solvents, time-correlated single-photon counting (TCSPC) measurements were performed and the fluorescence decay was investigated. Global analysis of the results revealed different dynamics in the different solvents. Although the main emission constant changed with the solvent, taking the fluorescence quantum yield into account showed that the radiative rate is independent of the solvent. The non-radiative rate, however, was highly solvent-dependent and responsible for the observed solvent-related changes in the fluorescence dynamics. Further studies with the increased time resolution of femtosecond fluorescence upconversion revealed that the main emission constant is independent of the excitation energy, whereas the dynamics of the cooling processes prior to emission were prolonged at higher excitation energy. This led to a conceivable photoreaction scheme with one emissive state and a competing non-radiative relaxation channel that may involve an intermediate state.
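The rate decomposition behind this argument is standard (the numbers below are hypothetical, not the measured PTC data):

```python
# Standard decomposition of the fluorescence decay: from the quantum
# yield Phi and the lifetime tau,
#   k_r  = Phi / tau          (radiative rate)
#   k_nr = (1 - Phi) / tau    (non-radiative rate)

def rates(phi, tau_ns):
    k_r = phi / tau_ns
    k_nr = (1.0 - phi) / tau_ns
    return k_r, k_nr   # in 1/ns

# Two hypothetical solvents of different viscosity: lifetime and yield
# both change, yet k_r stays the same while k_nr does not.
k_r1, k_nr1 = rates(0.10, 1.0)   # low-viscosity solvent (assumed values)
k_r2, k_nr2 = rates(0.20, 2.0)   # high-viscosity solvent (assumed values)
print(k_r1, k_r2)    # 0.1 0.1 -> radiative rate solvent-independent
print(k_nr1, k_nr2)  # 0.9 0.4 -> non-radiative rate solvent-dependent
```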
LG laser beams and their properties have attracted considerable scientific attention over the past two decades. As new techniques push the limits of technology to explore new phenomena, it is essential to understand the features of this beam class and to check the consistency of the findings with theory. The mode conversion
of a Hermite-Gaussian (HG) mode into a LG mode with the help of a spiral phase plate
(SPP) was investigated with respect to its space-time characteristics. It was found that
femtosecond LG and HG pulses of a given temporal duration share the same spectrum
and can be characterized using the same well-established methods. The mode conversion
proved to produce only the desired LG mode with its characteristic orbital angular momentum (OAM), which is conserved after frequency doubling of the pulse. Furthermore, it
was demonstrated that temporal shaping of the HG pulse does not alter the result of its
mode-conversion, as three completely different temporal pulse shapes produced the same
LG mode. Further attention was given to the sum frequency generation of fs LG beams
and the dynamics of the interference of an HG and an LG pulse. It was found that if both are chirped with opposite signs, the spatial intensity distribution rotates around the beam axis on the time scale of the pulse. A strategy was devised that would enable a measurement of these dynamics by upconversion of the interference with a third gate pulse. These results are discussed theoretically, and an experimental realization was attempted. The simulated findings could be reproduced only to a limited extent owing to experimental limitations, especially the interferometric stability of the setup.
A complete simulation system is proposed that can be used as an educational tool by physicians to train basic skills in Minimally Invasive Vascular Interventions. In the first part, a surface model is developed to assemble arteries from a planar segmentation. It is based on Sweep Surfaces and can be extended to T- and Y-like bifurcations. A continuous force vector field is described, representing the interaction between the catheter and the surface. The computation time of the force field is almost unaffected when the resolution of the artery is increased.
The mechanical properties of arteries play an essential role in the study of circulatory system dynamics, which is becoming increasingly important in the treatment of cardiovascular diseases. In Virtual Reality simulators, it is crucial to have a tissue model that responds in real time. In this work, the arteries are discretized by a two-dimensional mesh whose nodes are connected by three kinds of linear springs. Three tissue layers (Intima, Media, Adventitia) are considered and, starting from the stretch-energy density, some of the elasticity tensor components are calculated. The physical model linearizes and homogenizes the material response, but it still accounts for the geometric nonlinearity. In general, if the arterial stretch varies by 1% or less, the agreement between the linear and nonlinear models is trustworthy.
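The node coupling can be sketched with a minimal linear-spring example (illustrative only, not the thesis code; in the model, three such spring types act on each mesh node):

```python
import math

# Minimal sketch of the mass-spring idea: a node connected to a
# neighbour by a linear spring; the restoring force has magnitude
# k * (|d| - rest_length) and acts along the spring direction.

def spring_force(p, q, k, rest_length):
    """Force on node p exerted by the spring to node q (2D)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    dist = math.hypot(dx, dy)
    magnitude = k * (dist - rest_length)     # Hooke's law
    return (magnitude * dx / dist, magnitude * dy / dist)

# A spring stretched to twice its rest length pulls the node toward q:
fx, fy = spring_force((0.0, 0.0), (2.0, 0.0), k=10.0, rest_length=1.0)
print(fx, fy)   # 10.0 0.0
```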
In the last part, the physical model of the wire proposed by Konings is improved. As a result, a simpler and more stable method is obtained to calculate the equilibrium configuration of the wire. In addition, a geometrical method is developed to perform relaxations. It is particularly useful when the wire is hindered in the physical method because of the boundary conditions. The physical and the geometrical methods are merged, resulting in efficient relaxations. Tests show that the shape of the virtual wire agrees with the experiment. The proposed algorithm allows real-time executions and the hardware to assemble the simulator has a low cost.
Magnetic Particle Imaging (MPI) is a novel tomographic imaging modality capable of detecting the three-dimensional distribution of superparamagnetic nanoparticles. Owing to the direct detection of the tracer, MPI is a very fast and sensitive method [12], but it requires a second imaging modality, such as magnetic resonance imaging (MRI) or computed tomography, to localise the tracer (e.g. within tissue). This structural localisation is often performed by fusion imaging, in which the samples are measured separately in the two devices and the data sets are correlated retrospectively [75][76]. In a first experiment, a Traveling-Wave MPI scanner (TWMPI) [17] was combined with a low-field MRI scanner and the first hybrid measurements were performed [15]. The technical effort of building two separate devices, together with the fact that an MRI system at 30 mT requires very long measurement times, motivated an integrated TWMPI-MRI hybrid system in which the dynamic linear gradient array (dLGA) of a TWMPI scanner intrinsically generates the B0 field for the MRI.
The goal of this work was to lay the foundations for an integrated TWMPI-MRI hybrid scanner. The geometry of the dLGA was not to be altered, so that TWMPI measurements remain possible without restrictions. The most important steps and results of this work are summarised below.
At the beginning of this work, magnetic-field simulations were used to search for a current distribution that generates a sufficiently homogeneous magnetic field with the dLGA alone. The simulations showed that two different currents in 14 of the 20 individual coils of the dLGA suffice to achieve a field of view (FOV) of 36 mm x 12 mm with sufficient homogeneity. The homogeneity within the FOV was 3000 ppm. For the targeted field strength of 235 mT, currents of 129 A and 124 A were required.
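As a toy illustration of such an on-axis homogeneity estimate (assumed single-turn loop geometry and spacing; not the thesis' field simulation, which models the actual multi-winding dLGA):

```python
import math

MU0 = 4e-7 * math.pi

def loop_bz(z, z0, radius, current):
    """On-axis field [T] of a single circular loop centred at z0."""
    d = z - z0
    return MU0 * current * radius**2 / (2.0 * (radius**2 + d**2) ** 1.5)

# Toy coil stack (assumed geometry): 14 single-turn loops, 1 cm apart,
# radius 3 cm, driven with two current values as in the text.
positions = [(-6.5 + i) * 0.01 for i in range(14)]
currents = [124.0 if abs(p) > 0.04 else 129.0 for p in positions]

# Evaluate the superposed field along the axis of a +-18 mm FOV.
fov = [-0.018 + i * 0.004 for i in range(10)]
field = [sum(loop_bz(z, p, 0.03, c) for p, c in zip(positions, currents))
         for z in fov]

b_mean = sum(field) / len(field)
ppm = (max(field) - min(field)) / b_mean * 1e6   # peak-to-peak inhomogeneity
print(b_mean, ppm)
```

With single-turn loops the mean field is far below 235 mT; the real dLGA reaches that value with many windings per segment coil, but the homogeneity arithmetic is the same.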
The high currents of the dLGA required the development of a dedicated amplifier. The original concept, based on a linearly driven power transistor, was improved in numerous steps until the required currents could be switched on and off stably. Using a whole-body MRI system, the B0 field of the dLGA, generated by the custom-built amplifier, could be measured for the first time and compared with the simulation. The two profiles showed good qualitative agreement.
Finding the NMR signal was a challenge because of the custom-built amplifier, since at that time the required precision had not yet been reached and the most important parameter, the magnetic field strength in the dLGA, could not be measured. By contrast, the length of the pulses for the spin-echo sequence could be measured very well, although the optimal value was not yet known. The correct settings were found through iterative measurements and were re-adjusted after each hardware change.
The performance of the amplifier was investigated in more detail through repeated measurements of the NMR signal. It became clear that the precision had to be improved further to obtain reproducible results. Using the NMR signal, the B0 field could also be mapped; it showed good agreement with the simulation. With four segment coils of the dLGA it was possible to generate a linear gradient along the z-axis. This gradient was switched on in addition to the B0 field and likewise mapped; its profile also agreed well with the simulation. Using the gradient, frequency encoding and phase encoding were successfully implemented, allowing two samples to be distinguished by their position in both types of measurement. This completed the development of the MRI scanner.
Building the TWMPI scanner required, in addition to the dLGA, the fabrication of saddle coils. For the MPI measurements, the missing part of the transmit chain and the complete receive chain could be reused from an earlier version. The functionality of the MPI was likewise verified with a point sample and a phantom, in this case in two dimensions.
Extending the system to a hybrid scanner required further modifications compared with a pure TWMPI or MRI scanner. A way had to be found to quickly adapt the wiring of the dLGA for the respective modality. For this purpose, a patch board was built that allows the cabling of the dLGA to be changed in a short time. In addition, the saddle coils and the receive coil of the TWMPI, as well as the receive coil of the MRI, had to be accommodated inside the dLGA. A modular system allowed all components to be arranged inside the dLGA simultaneously. The measurable FOV of the MRI is matched to the homogeneity of the B0 field; the FOV of the TWMPI is larger.
At the end of this work, a hybrid measurement was performed successfully. The phantom consisted of two spheres filled with oil and two filled with an MPI tracer (Resovist). With TWMPI, the Resovist spheres could be imaged spatially, while with MRI the oil spheres could be imaged. This in-situ measurement demonstrated the successful realisation of the TWMPI-MRI hybrid-scanner concept.
In summary, this work laid the foundations for a TWMPI-MRI hybrid scanner. The greatest difficulty was generating a B0 field sufficiently homogeneous for the MRI to record a good NMR signal. With a simple current distribution, consisting of two different currents, a sufficiently homogeneous B0 field could be generated. More complex current distributions can further improve the homogeneity and thus enlarge the FOV.
In this work, MRI imaging was implemented for one dimension and is to be extended to 2D and 3D in future work. Ultimately, the particle distribution of the MPI tracer in living subjects is to be assigned to their anatomy by means of an MRI image. The first preclinical applications with the TWMPI scanner were carried out in [76][77][78]; these applications gain greater significance through the additional information provided by a TWMPI-MRI hybrid scanner.
In further work, the size of the FOV for the MRI should also be extended. Moreover, it makes sense to realise an electronic switch for toggling the dLGA between MRI and MPI.
The next version of the hybrid scanner could, for example, contain a completely redesigned dLGA in which each segment coil is split once in the radial direction into an inner and an outer coil. For MRI, the two coil parts would be driven in opposition to obtain a field that is homogeneous in the radial direction; for TWMPI, they would be driven in parallel to achieve the strongest possible field gradient.
This work generated a great deal of knowledge for the next version of a TWMPI-MRI hybrid scanner, which will be extremely helpful for the new design. The mapping of the B0 field showed that the simulated magnetic fields agree well with the measured ones. In addition, much was learned about combining TWMPI with MRI.
Coherent Multidimensional Spectroscopy in Molecular Beams and Liquids Using Incoherent Observables
(2018)
The aim of the present work was to implement an experimental approach that enables coherent two-dimensional (2D) electronic spectroscopy of samples in various states of matter. For samples in the liquid phase, a setup was realized that utilizes the sample fluorescence for the acquisition of 2D spectra. Whereas the liquid-phase approach has been established before, coherent 2D spectroscopy on gaseous samples in a molecular beam as developed in this work is in fact a new method. It employs for the first time cations in a time-of-flight mass spectrometer for signal detection and was used to obtain the first ion-selective 2D spectra of a molecular-beam sample. Additionally, a new acquisition concept was developed in this thesis that significantly decreases measurement times in 2D spectroscopy using optimized sparse sampling and a compressed-sensing reconstruction algorithm.
Characteristic of the variant of 2D spectroscopy presented in this work is the use of a phase-coherent sequence of four laser pulses in a fully collinear geometry for sample excitation. The pulse sequence was generated by a custom-designed pulse shaper capable of rapid scanning, i.e. changing pulse parameters such as time delays and phases at the repetition rate of the laser. The sample's response was detected by monitoring incoherent observables that arise from the final-state population, for instance fluorescence or cations. Phase cycling, i.e., signal acquisition with different combinations of the relative phases of the excitation pulses, was applied to extract nonlinear signal contributions from the full signal during data analysis.
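The phase-cycling extraction step can be sketched generically (the synthetic signal and coefficients below are illustrative assumptions, not the thesis implementation):

```python
import cmath

# Core idea of phase cycling: the detected signal is a sum of
# contributions that depend on the relative excitation phase phi with
# different integer orders n,
#   S(phi) = sum_n c_n * exp(i * n * phi).
# Sampling phi over N equidistant steps and weighting with
# exp(-i * n * phi) isolates the order-n contribution (a DFT over phi).

def extract_order(signals, n):
    N = len(signals)
    return sum(s * cmath.exp(-1j * n * 2 * cmath.pi * k / N)
               for k, s in enumerate(signals)) / N

# Synthetic signal with a linear (n=1) and a nonlinear (n=2) part:
N = 8
phis = [2 * cmath.pi * k / N for k in range(N)]
S = [3.0 * cmath.exp(1j * p) + 0.5 * cmath.exp(2j * p) for p in phis]

print(abs(extract_order(S, 1)))  # ~3.0  (linear contribution)
print(abs(extract_order(S, 2)))  # ~0.5  (nonlinear contribution)
```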
Liquid-phase 2D fluorescence spectroscopy was established with the laser dye cresyl violet as a sample molecule, confirming coherent oscillations previously observed in the literature, which originate from vibronic coherences in specific regions of the 2D spectrum.
The data set of this experiment was subsequently used to introduce optimized sparse sampling in 2D spectroscopy. An optimization algorithm was implemented to find the best sampling pattern while taking only one quarter of the regular time-domain sampling points, thereby reducing the acquisition time by a factor of four. Signal recovery was based on a new and compact representation of 2D spectra in the von Neumann basis, which requires about six times fewer coefficients than the Fourier basis to retain the relevant information. Successful reconstruction was demonstrated by recovering the coherent oscillations in cresyl violet from the reduced data set.
Finally, molecular-beam coherent 2D spectroscopy was introduced with an investigation of ionization pathways in highly excited nitrogen dioxide, revealing transitions to discrete auto-ionizing states as the dominant contribution to the ion signal. Furthermore, the ability of the time-of-flight approach to record reactant and product 2D spectra simultaneously enabled the observation of distinct differences in the multiphoton-ionization response functions of the nitrogen dioxide cation and the nitrogen oxide ionic fragment.
The developed experimental techniques of this work will facilitate fast acquisition of 2D spectra for samples in various states of matter and permit reliable direct comparison of results. Therefore, they pave the way to study the properties of quantum coherences during photophysical processes or photochemical reactions in different environments.