The aim of the present work is a comprehensive analysis of the production and subsequent decays of neutralinos in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), specifically for the next available electron-positron storage ring, LEP2 at CERN, with an expected center-of-mass energy of 190 GeV. The NMSSM is the simplest extension of the Minimal Supersymmetric Standard Model (MSSM) by a singlet superfield, so that the Higgs sector contains a total of seven physical Higgs particles: three neutral scalars, two pseudoscalars and two charged ones. In addition, the NMSSM contains five neutralinos instead of the four of the MSSM. In this work we present the 5 x 5 neutralino mixing matrix, set up the eigenvalue equation, and analyze the mass spectrum and the parameter dependence of possible massless states. For the study of neutralino production and decay, several scenarios were chosen in which the lightest neutralino has a mass of 10 GeV and a singlet component of more than 90%, or in which the lightest neutralino is up to 50 GeV heavy and the singlet content is shared between the two lightest neutralinos. The cross sections for neutralino production were calculated in these scenarios for center-of-mass energies from 100 GeV to 600 GeV, i.e. up to the range that a planned electron-positron linear collider can reach. Typical cross sections for the direct production of predominantly singlet-like neutralinos are of order 100 fb. Even if the lightest neutralino is very light, the next one may already be so heavy that only the undetectable pair production of the lightest supersymmetric particle is possible at LEP2. Therefore, if no neutralino is found at LEP2, no improvement of the lower neutralino mass bounds in the NMSSM is to be expected.
In scenarios with light singlet-like neutralinos, very light Higgs bosons with masses below the existing MSSM bounds can often exist as well. Thus, in all our scenarios the neutralino decay into a scalar or pseudoscalar Higgs boson may be possible and can then reach branching ratios of up to almost 100%. For the neutralinos that can be produced at LEP2, we calculate the branching ratios for the two-body decays into Higgs bosons, the three-body decays into two fermions, and the loop decay into a photon. In all cases the final state also contains the invisible lightest neutralino, which manifests itself experimentally as missing energy. To determine the signatures we also consider the subsequent decay modes of the light Higgs bosons. The detection of light singlet-like neutralinos in the NMSSM can on the one hand be impossible, if the heavier neutralinos either cannot be produced at the available center-of-mass energy or decay entirely into the LSP via Higgs bosons; on the other hand, it can be facilitated by clear signatures with a photon or with jets in the final state. At LEP2 there should therefore be good chances of discovering a neutralino also within the NMSSM. At the very least, further constraints on the parameter space will result. An appendix to the dissertation contains a complete list of all Feynman rules of the NMSSM that differ from those of the MSSM.
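The mass spectrum and singlet admixtures described above follow from diagonalizing the symmetric 5 x 5 neutralino mass matrix. A minimal NumPy sketch; the basis ordering and all matrix entries below are placeholder values for illustration, not an actual NMSSM parameter point:

```python
import numpy as np

# Placeholder symmetric mass matrix in an assumed basis
# (bino, wino, higgsino_d, higgsino_u, singlino); entries are
# illustrative numbers only, not derived from NMSSM parameters.
M = np.array([
    [120.0,   0.0, -40.0,  35.0,   0.0],
    [  0.0, 230.0,  70.0, -60.0,   0.0],
    [-40.0,  70.0,   0.0, -90.0,  25.0],
    [ 35.0, -60.0, -90.0,   0.0,  30.0],
    [  0.0,   0.0,  25.0,  30.0,  10.0],
])

# eigh diagonalizes real symmetric matrices: the eigenvalues are the
# mass eigenvalues (up to sign), the columns of U the mixing vectors.
masses, U = np.linalg.eigh(M)

# Singlet admixture of each eigenstate = squared singlino component.
singlet_fraction = U[4, :] ** 2
lightest = int(np.argmin(np.abs(masses)))
```

The squared entries of each column of the mixing matrix give the field content of the corresponding mass eigenstate, which is how a "singlet component of more than 90%" is quantified.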
Part of the interstellar matter (ISM) exists in the form of tiny solid bodies mixed with the interstellar gas. These particles are referred to as interstellar dust. Although dust accounts for only about 1% of the total mass of the ISM, its influence on the interstellar radiation field and on the dynamics of the gas cannot be neglected. It is the main cause of extinction, scattering and polarization of light. In addition, dust is an important coolant of the interstellar medium and influences the chemical processes within the ISM. Dust particles are subject to growth and destruction processes. They can attach molecules from their surroundings to their surface (accretion) or combine with other particles into larger dust grains (coagulation). Interaction with ions can erode surface material (sputtering), and collisions between dust particles lead to their fragmentation into smaller grains (shattering) or to their vaporization. Furthermore, dust particles are coupled to the gas and are dragged along by it. The focus of the present work was the study of the dynamical processes that dust particles undergo while traversing interstellar shock fronts. In this context, the destructive processes and the coupling to the gas play a particularly important role. Equations were introduced that describe the change of a dust distribution through these processes. In contrast to previous models, the dust particles are characterized therein not only by their mass but also by their velocity. In this way momentum conservation in a particle collision can be ensured, and it becomes possible, for example, to describe collisions between particles of equal mass.
The equations of dust and hydrodynamics were solved numerically for the case of stationary, one-dimensional shock waves, taking the interactions between gas and dust into account. Using this model, the effect of shock waves of varying strength on a dust distribution was studied, with different dust materials taken as a basis.
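The bookkeeping behind such growth processes can be illustrated with the mass-only Smoluchowski coagulation equation, which the velocity-resolved description above generalizes. A toy explicit-Euler discretization in Python; the constant kernel is a placeholder, not a physical coagulation rate:

```python
import numpy as np

def smoluchowski_step(n, dt, K):
    """One explicit Euler step of the discrete Smoluchowski equation
    dn_k/dt = 1/2 * sum_{i+j=k} K_ij n_i n_j - n_k * sum_j K_kj n_j.
    n[k] is the number density of grains of mass k+1 monomer units;
    K is a (placeholder) symmetric coagulation-kernel matrix."""
    N = len(n)
    dn = np.zeros(N)
    for k in range(N):
        # gain: collisions of grains with masses (i+1) + (j+1) = k+1
        gain = 0.0
        for i in range(k):
            j = k - 1 - i
            gain += K[i, j] * n[i] * n[j]
        # loss: grain k coagulating with any partner
        loss = n[k] * np.sum(K[k, :] * n)
        dn[k] = 0.5 * gain - loss
    return n + dt * dn
```

Total mass is conserved as long as collision products remain inside the mass grid; the thesis's extension additionally tracks velocities so that momentum is conserved in each collision.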
This thesis examines various aspects of charge transport in heterojunctions of normal metals (N) and superconductors (S) within the Bogoliubov-de Gennes formalism. The governing process is Andreev scattering: the scattering of electrons into holes, and vice versa, at spatial variations of the superconducting pair potential, accompanied by the creation or annihilation of a Cooper pair and thus by the induction of a supercurrent. If a superconductor is located between two normal-conducting regions, the supercurrent induced by Andreev scattering at one NS interface is converted back into a quasiparticle-carried current at the other NS interface. This conversion occurs through the incidence of a quasiparticle whose character is opposite to that of the quasiparticle incident on the other side of the superconductor, as is shown explicitly by means of wave-packet calculations. If the superconductor is replaced by a mesoscopic SNS junction, the many-particle configuration in the middle N layer is phase coherent and therefore different from the uncorrelated quasiparticle excitations that form the displaced Fermi sphere in the normal-conducting leads. The Josephson currents carried by the quasiparticles in the middle N layer are calculated under two different model assumptions: in one case only scattering states are considered as initial states; in the other, with a normal-scattering potential taken into account at the same time, only bound states. The SNS junction is modeled by a superconductor/semiconductor heterostructure whose parameter values are based on the experiments of Herbert Kroemer's group in Santa Barbara.
If the superconducting regions are connected directly to a reservoir of Cooper pairs, without normal-conducting leads, only quasiparticles in scattering states from the superconducting banks are incident on the NS interfaces of the junction. With the normal-conductor wave functions that evolve from these initial states when a voltage V is applied, the Josephson AC current density in the middle of the N layer is calculated at a temperature of T = 2.2 K. The current density exhibits voltage-dependent oscillations in time whose period is the inverse of the Josephson frequency. At small voltages, all current densities show a steep increase in magnitude, caused by quasiparticles which, coming from the condensate, are pulled by the electric field into the pair-potential well, where at small voltages they undergo a large number of Andreev scatterings and transport the charge 2e through the N layer in each electron-hole cycle. In the second case considered, with normal scattering taken into account, the total state of the system is expressed at every instant as a superposition of bound states. The energy of these bound states depends on the phase difference Phi between the superconducting layers. For values of the phase difference equal to integer multiples of Pi, states of opposite momenta are pairwise degenerate. The normal-scattering potential mixes these states, lifts their degeneracy and leads to energy gaps: energy bands form in Phi space which formally correspond to the Bloch bands of crystals in wave-number space.
If an external voltage is applied, the phase difference changes with time according to the Josephson equation, and the quasiparticles oscillate in their respective Phi Bloch bands: these Josephson-Bloch oscillations yield the "normal" Josephson AC current, which oscillates between positive and negative values and averages to zero over time. In addition, the quasiparticles can pass into higher bands by Zener tunneling, as the analogous process in semiconductor physics is called. While the direction of the Josephson current density reverses at the times of minimal energy gap, the Zener tunneling current density after a tunneling process has the same sign that the Josephson current density had before the tunneling process. If the applied voltage is sufficiently large and enough quasiparticles tunnel into the higher band, the Zener tunneling current density overcompensates the Josephson current density in the half period after the tunneling process, and the total current density again oscillates in the same direction as before the Zener tunneling. The period has thus effectively been halved: the total current density oscillates at twice the Josephson frequency. All the investigated aspects of charge transport through heterojunctions of normal metals and superconductors have one thing in common: the process fundamental to their understanding is Andreev scattering.
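The time dependence invoked above is governed by the standard Josephson relation for the phase difference and the resulting AC frequency:

```latex
\frac{d\varphi}{dt} = \frac{2eV}{\hbar}, \qquad
\nu_J = \frac{2eV}{h} \approx 483.6\ \mathrm{MHz}\ \text{per}\ \mu\mathrm{V}
```

A constant applied voltage thus makes the phase difference grow linearly in time, so the current oscillates at the Josephson frequency nu_J; the Zener-tunneling regime described above appears as a component at twice this frequency.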
In this work, effects of pair production through the interaction of high-energy gamma radiation with the metagalactic FIR-UV radiation field (MRF) were investigated. On the one hand, pair production has consequences for the observed spectra of active galactic nuclei; on the other hand, it also has a large influence on the extragalactic gamma-ray background. An improved version of the model of the FIR-UV radiation field was presented, with whose help intrinsic blazar spectra were determined from observed data. Furthermore, a model of the gamma-ray background based on EGRET blazars was calculated, in which particular emphasis was placed on the correct description of the absorption of primary gamma radiation and of the resulting secondary gamma radiation. Finally, it was shown that with the contribution of BL Lac objects to the gamma-ray background not only the missing flux but also the spectral shape of the data obtained from EGRET observations can be explained, without contradicting the current TeV data.
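For reference, the kinematics behind this absorption: a gamma ray of energy E_gamma can pair-produce on a background photon of energy epsilon once the two-photon center-of-mass energy exceeds twice the electron rest energy, and the observed flux is attenuated by the corresponding optical depth tau:

```latex
2 E_\gamma \, \epsilon \, (1 - \cos\theta) \ge 4 m_e^2 c^4
\quad\Longrightarrow\quad
\epsilon_{\mathrm{thr}} = \frac{m_e^2 c^4}{E_\gamma}\ \ (\theta = \pi),
\qquad
F_{\mathrm{obs}}(E_\gamma) = F_{\mathrm{int}}(E_\gamma)\, e^{-\tau(E_\gamma, z)}
```

For a head-on collision with E_gamma = 1 TeV this gives epsilon_thr ≈ 0.26 eV, i.e. TeV gamma rays are absorbed chiefly on infrared background photons, which is why the FIR part of the MRF model matters for the TeV data.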
This thesis aims at a description of the equilibrium dynamics of quantum spin glass systems. To this end, a generic fermionic SU(2), spin 1/2 spin glass model with infinite-range interactions is defined in the first part. The model is treated in the framework of imaginary-time Grassmann field theory along with the replica formalism. A dynamical two-step decoupling procedure, which retains the full time dependence of the (replica-symmetric) saddle point, is presented. As a main result, a set of highly coupled self-consistency equations for the spin-spin correlations can be formulated. Beyond the so-called spin-static approximation, two complementary systematic approximation schemes are developed in order to render the occurring integration problem tractable. One of these methods restricts the quantum-spin dynamics to a manageable number of bosonic Matsubara frequencies. A sequence of improved approximants to some quantity can be obtained by gradually extending the set of employed discrete frequencies. Extrapolation of such a sequence yields an estimate of the full dynamical solution. The other method is based on a perturbative expansion of the self-consistency equations in terms of the dynamical correlations. In the second part these techniques are applied to the isotropic Heisenberg spin glass both on the Fock space (HSGF) and, exploiting the Popov-Fedotov trick, on the spin space (HSGS). The critical temperatures of the paramagnet to spin glass phase transitions are determined accurately. Compared to the spin-static results, the dynamics causes slight increases of T_c by about 3% and 2%, respectively. For the HSGS the specific heat C(T) is investigated in the paramagnetic phase and, by way of a perturbative method, below but close to T_c. The exact C(T) curve is shown to exhibit a pronounced non-analyticity at T_c and, in contradiction to recent reports by other authors, there is no indication of a maximum above T_c.
In the last part of this thesis the spin glass model is augmented with a nearest-neighbor hopping term on an infinite-dimensional cubic lattice. An extended self-consistency structure can be derived by combining the decoupling procedure with the dynamical CPA method. For the itinerant Ising spin glass numerous solutions within the spin-static approximation are presented both at finite and zero temperature. Systematic dynamical corrections to the spin-static phase diagram in the plane of temperature and hopping strength are calculated, and the location of the quantum critical point is determined.
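The frequency-extrapolation idea described above can be sketched generically: compute an observable with M retained Matsubara frequencies for several values of M, then fit its leading dependence on 1/M and evaluate the fit at 1/M -> 0. The polynomial convergence law assumed below is purely illustrative; the actual asymptotics of the self-consistency equations may differ:

```python
import numpy as np

def extrapolate_matsubara(M_values, a_values, order=2):
    """Fit a(M) = a_inf + c1/M + c2/M^2 + ... by least squares in the
    variable x = 1/M and return the extrapolated value a_inf at x = 0."""
    x = 1.0 / np.asarray(M_values, dtype=float)
    coeffs = np.polyfit(x, np.asarray(a_values, dtype=float), order)
    return float(np.polyval(coeffs, 0.0))
```

Gradually enlarging the frequency set and extrapolating the resulting sequence is what turns a truncated dynamical calculation into an estimate of the full solution.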
In this thesis, a phenomenological phase-fluctuation model for the pseudogap regime of the underdoped cuprates was discussed. The key idea of the phase-fluctuation scenario in the high-T_c superconductors is the notion that the pseudogap observed in a wide variety of experiments arises from phase fluctuations of the superconducting gap. In this scenario, below a mean-field temperature scale T_c^{MF}, a d_{x^2-y^2}-wave gap amplitude is assumed to develop. However, the superconducting transition is suppressed to a considerably lower transition temperature T_c by phase fluctuations. In the intermediate temperature regime between T_c^{MF} and T_c, phase fluctuations of the superconducting order parameter give rise to the pseudogap phenomena. The phenomenological phase-fluctuation model discussed in this thesis consists of a two-dimensional BCS-like Hamiltonian where the phase of the pairing amplitude is free to fluctuate. The fluctuations of the phase were treated by a Monte Carlo simulation of a classical XY model. First, the density of states was calculated. The quasiparticle tunneling conductance (dI/dV) obtained from our phenomenological phase-fluctuation model was able to reproduce characteristic and salient features of recent scanning-tunneling studies of Bi2212 and Bi2201, suggesting that the pseudogap behavior observed in these experiments arises from phase fluctuations of the d_{x^2-y^2}-wave pairing gap. In calculating the single-particle spectral weight, we were further able to show in detail how phase fluctuations influence the experimentally observed quasiparticle spectra. In particular, the disappearance of the BCS-Bogoliubov quasiparticle band at T_c and the change from a more V-like superconducting gap to a rather U-like pseudogap above T_c can be explained in a consistent way by assuming that the low-energy pseudogap in the underdoped cuprates is due to phase fluctuations of a local d_{x^2-y^2}-wave pairing gap with fixed magnitude.
Furthermore, phase fluctuations can explain why the pseudogap starts closing from the nodal points, whereas it rather fills in along the anti-nodal directions, and they can also account for the characteristic temperature dependence of the superconducting (pi,0) photoemission peak. Next, we have shown that the "violation" of the low-frequency optical sum rule recently observed in the SC state of underdoped Bi2212, which is associated with a reduction of kinetic energy, can be related to the role of phase fluctuations. The decrease in kinetic energy is due to the sharpening of the quasiparticle peaks close to the superconducting transition at T_c = T_{KT}, where the phase correlation length xi diverges. A detailed analysis of the temperature and frequency dependence of the optical conductivity sigma(omega) = sigma_1(omega) + i sigma_2(omega) revealed a superconducting scaling of sigma_2(omega), which sets in already above T_c, exactly as observed in high-frequency microwave conductivity experiments on Bi2212. On the other hand, our model was only able to account for the characteristic peak observed in sigma_1(omega) close to the superconducting transition after the inclusion of an additional marginal-Fermi-liquid scattering rate in the optical conductivity formula. Finally, we calculated the static uniform diamagnetic susceptibility. It turned out that the precursor effects of the fluctuating diamagnetism above T_c are very small and limited to temperatures close to T_c in a phase-fluctuation scenario of the pseudogap. Instead, the temperature dependence of the uniform static magnetic susceptibility is dominated by the Pauli spin susceptibility, which displays a very characteristic temperature dependence, independent of the details of the gap function used in our model. This temperature dependence is qualitatively very similar to the experimentally observed change of the Knight shift as a function of temperature in underdoped Bi2212.
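The classical XY model used to sample the phase fluctuations can be simulated with a few lines of Metropolis Monte Carlo. A minimal sketch; lattice size, temperatures and the proposal step are illustrative and not fitted to any cuprate:

```python
import numpy as np

def metropolis_xy(L=8, T=1.0, sweeps=200, J=1.0, seed=0):
    """Metropolis sampling of the 2D classical XY model
    H = -J * sum_<ij> cos(theta_i - theta_j) on an L x L periodic
    lattice. Returns the final angle configuration."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, L))

    def local_e(angle, i, j):
        # energy of site (i, j) at the given angle, with its 4 neighbors
        nb = (theta[(i + 1) % L, j], theta[(i - 1) % L, j],
              theta[i, (j + 1) % L], theta[i, (j - 1) % L])
        return -J * sum(np.cos(angle - t) for t in nb)

    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                new = theta[i, j] + rng.uniform(-0.5, 0.5)
                dE = local_e(new, i, j) - local_e(theta[i, j], i, j)
                if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                    theta[i, j] = new
    return theta

def bond_energy(theta, J=1.0):
    """Total bond energy of a configuration (periodic boundaries)."""
    return -J * (np.cos(theta - np.roll(theta, 1, axis=0)).sum()
                 + np.cos(theta - np.roll(theta, 1, axis=1)).sum())
```

In the phase-fluctuation model, sampled configurations like these supply the fluctuating phases entering the BCS-like Hamiltonian, from which spectra and response functions are then computed.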
In this work, we studied in great detail how the unknown parameters of the SUSY seesaw model can be determined from measurements of observables at or below collider energies, namely rare flavor violating decays of leptons, slepton pair production processes at linear colliders and slepton mass differences. This is a challenging task as there is an intricate dependence of the observables on the unknown seesaw, light neutrino and mSUGRA parameters. In order to separate these different influences, we first considered two classes of seesaw models, namely quasi-degenerate and strongly hierarchical right-handed neutrinos. As a generalisation, we presented a method that can be used to reconstruct the high energy seesaw parameters, among them the heavy right-handed neutrino masses, from low energy observables alone.
In this thesis we analyze CP-violating effects of MSSM phases in production and two-body decays of neutralinos, charginos and sfermions. For different supersymmetric processes we define and calculate CP-odd asymmetries, which are based on triple products. We present numerical results for electron-positron collisions at a future linear collider with a center-of-mass energy of 500-800 GeV, high luminosity and longitudinally polarized beams.
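Schematically, such a CP-odd observable is built from a triple product of three measured momentum (or spin) vectors and compares event counts with positive and negative values:

```latex
\mathcal{T} = \vec{p}_1 \cdot \left( \vec{p}_2 \times \vec{p}_3 \right),
\qquad
A_T = \frac{\sigma(\mathcal{T} > 0) - \sigma(\mathcal{T} < 0)}
           {\sigma(\mathcal{T} > 0) + \sigma(\mathcal{T} < 0)}
```

A nonvanishing A_T at tree level signals CP-violating phases, although T-odd effects from absorptive parts have to be disentangled separately; the specific momentum combinations used for each process are defined in the thesis itself.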
In this work the supersymmetric seesaw model and its effects on low-energy leptonic observables and thermal leptogenesis have been systematically investigated. Precision measurements will increase the sensitivity to lepton-flavor violating decays, particularly to Br(l_j -> l_i gamma), and also to electric and magnetic dipole moments in the near future. In order to also improve the accuracy of theoretical predictions for these processes, we have performed a full one-loop calculation of the underlying supersymmetric processes, taking the lepton masses into account. Since the mechanism of soft supersymmetry breaking (SSB) is completely unknown, a novel analysis beyond the often studied minimal Supergravity scenarios has been performed. This way it has been demonstrated that in the considered mSUGRA, AMSB, GMSB and gaugino-mediated scenarios, the ongoing search for Br(mu -> e gamma) can constrain fundamental SSB parameters and/or the seesaw parameters. On the other hand, the basic parameters of thermal leptogenesis, such as the CP asymmetry in the decays of the lightest right-handed Majorana neutrino, provide probes of the unknown complex orthogonal R-matrix of the seesaw model.
We investigate the static and dynamic single-particle properties at zero temperature within the Hubbard and the three-band Hubbard model for the superconducting copper oxides. Based on the recently proposed self-energy functional approach (SFA) [M. Potthoff, Eur. Phys. J. B 32, 429 (2003)], we present an extension of the cluster-perturbation theory (CPT) to systems with spontaneously broken symmetry. Our method accounts for both short-range correlations and long-range order. Short-range correlations are accurately taken into account via the exact diagonalization of finite clusters. Long-range order is described by variational optimization of a fictitious symmetry-breaking field. In comparison with related cluster methods, our approach is more flexible and, for a given cluster size, less demanding numerically, especially at zero temperature. An application of the method to the antiferromagnetic phase of the Hubbard model at half-filling shows good agreement with results from quantum Monte-Carlo calculations. We demonstrate that the variational extension of the cluster-perturbation theory is crucial to reproduce salient features of the single-particle spectrum of the insulating cuprates. Comparison of the dispersion of the low-energy excitations with recent experimental results of angular resolved photoemission spectroscopy (ARPES) allows us to fix a consistent parameter set for the one-band Hubbard model with an additional hopping parameter t' along the lattice diagonal. The doping dependence of the single-particle excitations is studied within the t-t'-U Hubbard model with special emphasis on the electron-doped compounds. We show that the ARPES results on the band structure and the Fermi surface of Nd_{2-x}Ce_xCuO_{4-\delta} are naturally obtained within the t-t'-U Hubbard model without further need for readjustment or fitting of parameters, as proposed in recent theoretical considerations.
We present a theory for the photon energy and polarization dependence of ARPES intensities from the CuO_2 plane in the framework of strong correlation models. The importance of surface states for the observed experimental facts is considered. We show that for an electric field vector in the CuO_2 plane the 'radiation characteristics' of the O 2p_{\sigma} and Cu 3d_{x^2-y^2} orbitals are strongly peaked along the CuO_2 plane, i.e. most photoelectrons are emitted at grazing angles. This suggests that surface states play an important role in the observed ARPES spectra, consistent with recent data from Sr_2CuO_2Cl_2. We show that a combination of surface state dispersion and a Fano resonance between the surface state and the continuum of LEED states may produce a precipitous drop in the observed photoelectron current as a function of in-plane momentum, which may well mimic a Fermi-surface crossing. This effect may explain the simultaneous 'observation' of a hole-like and an electron-like Fermi surface in Bi_2Sr_2CaCu_2O_{8+\delta} at different photon energies.
In this PhD thesis, we develop models for the numerical simulation of epitaxial crystal growth, as realized, e.g., in molecular beam epitaxy (MBE). The basic idea is to use a discrete lattice gas representation of the crystal structure, and to apply kinetic Monte Carlo (KMC) simulations for the description of the growth dynamics. The main advantage of the KMC approach is the possibility to account for atomistic details and at the same time cover MBE relevant time scales in the simulation. In chapter 1, we describe the principles of MBE, pointing out relevant physical processes and the influence of experimental control parameters. We discuss various methods used in the theoretical description of epitaxial growth. Subsequently, the underlying concepts of the KMC method and the lattice gas approach are presented. Important aspects concerning the design of a lattice gas model are considered, e.g. the solid-on-solid approximation or the choice of an appropriate lattice topology. A key element of any KMC simulation is the selection of allowed events and the evaluation of Arrhenius rates for thermally activated processes. We discuss simplifying schemes that are used to approximate the corresponding energy barriers if detailed knowledge about the barriers is not available. Finally, the efficient implementation of the MC kinetics using a rejection-free algorithm is described. In chapter 2, we present a solid-on-solid lattice gas model which aims at the description of II-VI(001) semiconductor surfaces like CdTe(001). The model accounts for the zincblende structure and the relevant surface reconstructions of Cd- and Te-terminated surfaces. Particles at the surface interact via anisotropic nearest and next nearest neighbor interactions, whereas interactions in the bulk are isotropic. The anisotropic surface interactions reflect known properties of CdTe(001) like the small energy difference between the c(2x2) and (2x1) vacancy structures of Cd-terminated surfaces. 
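The rejection-free event selection mentioned above (often called the BKL or n-fold-way algorithm) together with Arrhenius rates can be sketched in a few lines. The attempt frequency and barrier values are placeholders, not the parameters of the CdTe model:

```python
import numpy as np

def arrhenius_rate(barrier_eV, T_K, nu0=1.0e13):
    """Rate of a thermally activated process, nu0 * exp(-E / (kB * T)).
    nu0 = 1e13 1/s is a typical attempt frequency (placeholder value)."""
    kB = 8.617e-5  # Boltzmann constant in eV/K
    return nu0 * np.exp(-barrier_eV / (kB * T_K))

def kmc_step(rates, rng):
    """One rejection-free KMC step: select event i with probability
    rates[i] / sum(rates), and advance the clock by an exponentially
    distributed time increment governed by the total rate."""
    total = rates.sum()
    i = int(np.searchsorted(np.cumsum(rates), rng.random() * total))
    dt = -np.log(rng.random()) / total
    return i, dt
```

Because every step executes an event, no proposed moves are wasted, which is what lets KMC reach MBE-relevant time scales while retaining atomistic detail.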
A key element of the model is the presence of additional Te atoms in a weakly bound Te* state, which is motivated by experimental observations of Te coverages exceeding one monolayer at low temperatures and high Te fluxes. The true mechanism of binding excess Te to the surface is still unclear. Here, we use a mean-field approach assuming a Te* reservoir with limited occupation. In chapter 3, we perform KMC simulations of atomic layer epitaxy (ALE) of CdTe(001). We study the self-regulation of the ALE growth rate and demonstrate how the interplay of the Te* reservoir occupation with the surface kinetics results in two different regimes: at high temperatures the growth rate is limited to one half layer of CdTe per ALE cycle, whereas at low enough temperatures each cycle adds a complete layer. The temperature where the transition between the two regimes occurs depends mainly on the particle fluxes. The temperature dependence of the growth rate and the flux dependence of the transition temperature are in good qualitative agreement with experimental results. Comparing the macroscopic activation energy for Te* desorption in our model with experimental values we find semiquantitative agreement. In chapter 4, we study the formation of nanostructures with alternating stripes during submonolayer heteroepitaxy of two different adsorbate species on a given substrate. We evaluate the influence of two mechanisms: kinetic segregation due to chemically induced diffusion barriers, and strain relaxation by alternating arrangement of the adsorbate species. KMC simulations of a simple cubic lattice gas with weak inter-species binding energy show that kinetic effects are sufficient to account for stripe formation during growth. The dependence of the stripe width on control parameters is investigated. We find an Arrhenius temperature dependence, in agreement with experimental investigations of phase separation in binary or ternary material systems. 
Canonical MC simulations show that the observed stripes are not stable under equilibrium conditions: the adsorbate species separate into very large domains. Off-lattice simulations which account for the lattice misfit of the involved particle species show that, under equilibrium conditions, the competition between binding and strain energy results in regular stripe patterns with a well-defined width depending on both misfit and binding energies. In KMC simulations, stripe formation and the experimentally reported ramification of adsorbate islands are reproduced. To clarify the origin of the island ramification, we investigate an enhanced lattice gas model whose parameters are fitted to match characteristic off-lattice diffusion barriers. The simulation results show that a satisfactory explanation of experimental observations within the lattice gas framework requires a detailed incorporation of long-range elastic interactions. In the appendix we discuss supplementary topics related to the lattice gas simulations in chapter 4.
The four-dimensional Minkowski space is known to be a good description for space-time down to the length scales probed by the latest high-energy experiments. Nevertheless, there is the viable and exciting possibility that additional space-time structure will be observable in the next generation of collider experiments. Hence, we discuss different extensions of the standard model of particle physics with an extra dimension at the TeV scale. We assume that some of the gauge and Higgs bosons propagate in one additional spatial dimension, while matter fields are confined to a four-dimensional subspace, the usual Minkowski space. After compactification on an S^1/Z_2 orbifold, an effective four-dimensional theory is obtained where towers of Kaluza-Klein (KK) modes, in addition to the standard model fields, reflect the higher-dimensional structure of space-time. The models are elaborated from the 5D Lagrangian to the Feynman rules of the KK modes. Special attention is paid to an appropriate generalization of the R_xi gauge and the interplay between spontaneous symmetry breaking and compactification. Confronting the observables in 5D standard model extensions with combined precision measurements at the Z-boson pole and the latest data from LEP2, we constrain the possible size R of the extra dimension experimentally. A multi-parameter fit of all relevant input parameters leads to bounds for the compactification scale M = 1/R in the range 4-6 TeV at the 2 sigma confidence level and shows how the mass of the Higgs boson is correlated with the size of an extra dimension. Considering a future linear e+e- collider, we outline the discovery potential for an extra dimension using the proposed TESLA specifications as an example. As a consistency check for the various models, we analyze Ward identities and the gauge boson equivalence theorem in W-pair production and find that gauge symmetry is preserved by a complex interplay of the Kaluza-Klein modes.
In this context, we point out the close analogy between the traditional Higgs mechanism and mass generation for gauge bosons via compactification. Beyond tree level, the higher-dimensional models studied extensively in the literature and in the first part of this thesis have to be extended. We modify the models by the inclusion of brane kinetic terms, which are required as counterterms. Again, we derive the corresponding 4D theory for the KK towers, paying special attention to gauge fixing and spontaneous symmetry breaking. Finally, the phenomenological implications of the new brane kinetic terms are investigated in detail.
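For orientation, compactifying a 5D field of four-dimensional mass m_0 on a circle of radius R (with the S^1/Z_2 projection selecting the surviving modes) produces the standard Kaluza-Klein mass tower

```latex
m_n^2 = m_0^2 + \frac{n^2}{R^2}, \qquad n = 0, 1, 2, \dots
```

so a bound M = 1/R of 4-6 TeV pushes the first KK excitations to multi-TeV masses: beyond the direct reach of LEP2, but still visible indirectly through their contributions to precision observables.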
The Galactic Starburst Region NGC 3603 : exciting new insights on the formation of high mass stars
(2004)
One of the most fundamental, yet still unsolved problems in star formation research is addressed by the question "How do high mass stars form?". While most details related to the formation and early evolution of low mass stars are quite well understood today, the basic processes leading to the formation of high mass stars still remain a mystery. There is no doubt that low mass stars like our Sun form via accretion of gas and dust from their natal environment. With respect to the formation of high mass stars, theorists currently discuss two possible scenarios controversially: First, similar to stars of lower masses, high mass stars form by continuous (time variable) accretion of large amounts of gas and dust through their circumstellar envelopes and/or disks. Second, high mass stars form by repeated collisions (coalescence) of protostars of lower masses. Both scenarios bear difficulties which impose strong constraints on the final mass of the young star. To find evidence for or against one of these two theoretical models is a challenging task for observers. First, sites of high mass star formation are much more distant than the nearby sites of low mass star formation. Second, high mass stars form and evolve much faster than low mass stars. In particular, they contract to main-sequence, hydrogen-burning temperatures and densities on time scales which are much shorter than typical accretion time scales. Third, as a consequence of the previous point, young high mass stars are usually deeply embedded in their natal environment throughout their (short) pre-main sequence phase. Therefore, high mass protostars are rare, difficult to find and difficult to study. In my thesis I undertake a novel approach to search for and to characterize high mass protostars, by looking into a region where young high mass stars form in the violent neighbourhood of a cluster of early type main sequence stars.
The presence of already evolved O type stars provides a wealth of energetic photons and powerful stellar winds which evaporate and disperse the surrounding interstellar medium, thus "lifting the curtains" around nearby young stars at a relatively early evolutionary stage. Such conditions are met in the Galactic starburst region NGC 3603. Nevertheless, a large observational effort with different telescopes and instruments -- in particular, taking advantage of the high angular resolution and high sensitivity of near and mid IR instruments available at ESO -- was necessary to achieve the goals of my study. After a basic introduction to the topic of (high mass) star formation in Chapter 1, a short overview of the investigated region NGC 3603 and its importance for both galactic and extragalactic star formation studies is given in Chapter 2. Then, in Chapter 3, I report on a comprehensive investigation of the distribution and kinematics of the molecular gas and dust associated with the NGC 3603 region. In Chapter 4 I thoroughly address the radial extent of the NGC 3603 OB cluster and the spatial distribution of the cluster members. Together with deep Ks band imaging data, a detailed survey of NGC 3603 at mid IR wavelengths makes it possible to search the neighbourhood of the cold molecular gas and dust for sources with intrinsic mid IR excess (Chapter 5). In Chapter 6 I characterize the most prominent sources of NGC 3603 IRS 9 and show that these sources are bona fide candidates for high mass protostars. Finally, a concise summary as well as an outlook on future prospects in high mass star formation research is given in Chapter 7.
This work was motivated by experiments on the potential and current distribution in quantum Hall systems, carried out in recent years in the von Klitzing department at the MPI für Festkörperforschung. These showed that electrostatic screening effects in two-dimensional electron systems (2DES) exhibiting the integer quantum Hall effect (QHE) are very important for understanding both the current distribution within the sample and the extreme accuracy of the measured quantized values of the Hall resistance. This led to the following program for the present work. After an introductory chapter, Chapter 2 presents the formalism with which, in the later chapters, electron densities and electrostatic potentials (e.g. those confining the 2DES to a sample with strip geometry) are calculated self-consistently. This self-consistency consists of two parts. First, for a given potential, the electron density is calculated. Second, from a given charge distribution, consisting of the (positive) background charges and the electron charge density computed in the first step, together with suitable boundary conditions (constant potential on metallic gates), the electrostatic potential is obtained by solving the Poisson equation. If, in the first step, the electron density is calculated quantum mechanically from the energy eigenfunctions and eigenvalues, taking Fermi-Dirac statistics into account, one obtains the Hartree approximation, which yields the density as a nonlocal functional of the potential. If the extent of the wave functions can be neglected on the length scale on which the potential typically varies, the Hartree approximation simplifies to the Thomas-Fermi approximation, which describes a local relation between electron density and potential. 
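The two-step self-consistency described above can be sketched in a minimal toy calculation. This is not the geometry or the electrostatics of the thesis: it is a one-dimensional model in arbitrary units, with an illustrative logarithmic interaction kernel, an invented background density, and a weak coupling constant chosen purely so that the fixed-point iteration converges.

```python
import numpy as np

# Toy sketch of a self-consistent Thomas-Fermi-Poisson loop: (1) a local
# Thomas-Fermi relation gives the electron density for a given potential,
# (2) the potential of the net charge is recomputed, and both steps are
# iterated with linear mixing until convergence. All quantities are in
# arbitrary units; kernel, background, and coupling are assumptions.
nx = 200
x = np.linspace(-1.0, 1.0, nx)
dx = x[1] - x[0]
n_bg = 1.0 - x**2                    # assumed positive (donor) background
kT, mu = 0.05, 0.0                   # temperature, electrochemical potential
coupling, alpha = 0.1, 0.3           # weak coupling + mixing for stability

# illustrative stand-in for the Coulomb kernel of a strip-shaped sample
K = -np.log(np.abs(x[:, None] - x[None, :]) + dx) * dx

def tf_density(V):
    # Thomas-Fermi density of a 2DES with constant density of states D0 = 1:
    # n(x) = D0 * kT * ln(1 + exp((mu - V(x)) / kT))
    return kT * np.log1p(np.exp((mu - V) / kT))

V = np.zeros(nx)
for it in range(500):
    n = tf_density(V)                          # step 1: density from potential
    V_new = coupling * (K @ (n - n_bg))        # step 2: potential from charge
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = (1.0 - alpha) * V + alpha * V_new      # linear mixing
```

With a weak coupling the map is a contraction, so simple linear mixing suffices; realistic parameters generally require more careful iteration schemes.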
Most of the concrete calculations were carried out within this self-consistent Thomas-Fermi-Poisson approximation. Chapter 3 investigates the general screening behavior of a 2DES in a high magnetic field. We consider the response to a harmonic potential modulation in the unbounded 2DES and in strip-shaped confined systems with two different types of boundary conditions. At low temperatures and high magnetic fields we find extremely nonlinear screening. In the unbounded 2DES we characterize the screening by computing the total variation of the self-consistently calculated potential as a function of the amplitude of the imposed cosine potential. At fixed magnetic field this yields step functions whose shape depends strongly on the filling factor of the Landau levels in the homogeneous state without imposed potential (see Figures 3.2-3.6). Perhaps even more unexpected curves result when, at fixed modulation potential, the variance of the self-consistent potential is plotted against the magnetic field B (Fig. 3.9). The results, however, can easily be understood and (at temperature T = 0) summarized in a simple scheme (Fig. 3.7). As an ordering principle it turns out that the system always settles into states in which the electron density deviates as little as possible from that at vanishing magnetic field. If the cyclotron energy is large compared to the thermal energy kBT, this requires that in the large regions in which the density varies, a Landau level E_n must lie directly at the electrochemical potential, which is constant in equilibrium ("pinning"). These regions are called compressible. In the compressible regions electrons can easily be redistributed, i.e. the density is easily changed, and in these regions screening is extremely effective. If compressible regions with different Landau levels E_n at the electrochemical potential exist, e.g. 
at large modulation or because the density decreases towards the sample edge, then between adjacent compressible regions with different Landau quantum numbers n there are "incompressible" regions, in which the electrochemical potential lies between two Landau levels. There, all Landau levels below the electrochemical potential are occupied and those above it are empty. Consequently, the filling factor is integer there and the density is constant. The interplay between compressible and incompressible regions determines the screening behavior. Edge effects turn out to be important for the screening in the interior of a strip-shaped sample only in those magnetic-field intervals in which (even without imposed modulation) a new incompressible strip emerges in the sample center. Chapter 4 investigates the role of the incompressible strips in an idealized, strip-shaped Hall sample. Using a local version of Ohm's law, we calculate, for a given total current, the current density and the now position-dependent electrochemical potential whose gradient drives the current density. For the local conductivity tensor we take a result calculated for the homogeneous 2DES and replace the filling factor by its local value. It follows that, if incompressible strips exist, the total current is confined to these strips, in which the components of the resistivity take the values of the free, ideal 2DES, i.e. vanishing longitudinal and quantized Hall resistance. From Hartree calculations we show that incompressible strips exist only in magnetic-field intervals of finite width (around integer filling factors), and that near filling factor 4 there are only incompressible strips with local filling factor \nu(x) = 4, but none with \nu(x) = 2, in contrast to the result of the Thomas-Fermi-Poisson approximation, which is not valid here. 
To remedy this shortcoming of the Thomas-Fermi-Poisson approximation and artifacts of the strictly local model, we carry out the calculations with a conductivity tensor averaged on the scale of the mean electron spacing. Within a linear-response calculation, this yields very good agreement with the potential measurements that motivated this dissertation, a causal connection between the existence of incompressible strips and the existence of plateaus in the QHE, and an understanding of the extreme accuracy with which the quantized resistance values can be reproduced, independent of sample material and geometry. In Chapter 5 we investigate the random potential in which the electrons move. We assume that behind an undoped layer there is a plane of randomly distributed ionized donors whose Coulomb potentials superpose to form the random potential. We point out that the long-range fluctuations of this potential behave differently from the short-range ones. The short-range fluctuations decay exponentially with the distance of the donor plane from the plane of the 2DES, but are only weakly screened by the 2DES (at B = 0). We have taken these fluctuations into account through the finite conductivities and the collision broadening of the Landau levels. The long-range fluctuations, on the other hand, depend only weakly on the distance of the donor plane, but are strongly screened by the 2DES. These should be included explicitly in the self-consistent screening calculation. First attempts in this direction show that they can broaden, shift, and stabilize the quantum Hall plateaus. They should become particularly important for wide samples, in which they can cause additional incompressible strips in the sample interior. 
Finally, in Chapter 6 we discuss screening effects in a double-layer system of two parallel 2DES. Interesting new effects arise when the layers have different densities. The appearance of incompressible strips in one layer can then have drastic effects on the other layer. Resistance measurements as a function of magnetic field, recently performed on such systems, show hysteresis at the edge of a QH plateau, i.e. the curve measured for increasing magnetic field does not coincide with the curve measured for decreasing magnetic field, if this magnetic-field range falls within a QH plateau of the other layer. We develop a model and describe model calculations that make this phenomenon plausible.
The mechanism of spontaneous symmetry breaking is essential to provide masses to the W and Z gauge bosons and the fermions of the SM. We hope to elucidate this mechanism at the next generation of colliders. While the SM has been tested with astonishing precision, it is believed to be an effective theory of a more fundamental Grand Unified Theory. SUSY is one of the most attractive extensions of the SM of particle physics. Therefore, the search for SUSY is a top priority at the next generation of colliders. Once Higgs bosons are discovered, a precise determination of their properties is necessary to differentiate between different models, in particular the MSSM. A muon collider, running at center of mass energies around the neutral Higgs boson resonances, would allow precise measurements of masses and widths, as well as of the couplings to their decay products. In particular, their couplings to supersymmetric particles are essential to probe SUSY. Therefore, we study the decays of the heavier CP-even and CP-odd Higgs bosons into lighter chargino or neutralino pairs. In this thesis we have analyzed the polarization effects of the beams and of the charginos and neutralinos produced in mu+ mu- annihilation around the center of mass energies of the Higgs boson resonances H and A. For the production of equal charginos we have shown that the ratio of the H-chargino and A-chargino couplings can be precisely determined independently of the chargino decay mechanism. This method avoids reference to other experiments and makes only a few model-dependent assumptions. Here we have analyzed the effect of the energy spread and of the error from the non-resonant channels, including an irreducible standard model background contribution. For small tan(beta) the process yields large cross sections of up to a picobarn. For the production of two different charginos we have shown that the H-A interference can be analyzed using asymmetries of the charge conjugated processes. 
The asymmetries depend on the muon longitudinal beam polarizations and vanish for unpolarized beams. For chargino pair production with subsequent two-body decay of one of the charginos we have shown that charge and beam polarization asymmetries in the energy distributions of the decay particles are sensitive to the interference of scalar exchange channels with different CP quantum numbers. This process provides unique information on the interference of overlapping Higgs boson resonances. The effect is larger for regions of parameter space with intermediate values of tan(beta) and light sleptons or LSP neutralinos. For chargino pair production with subsequent two-body decays of both charginos we have defined energy distribution and angular asymmetries in the final particles, in order to analyze the spin-spin correlations of the charginos. The transverse polarizations of the charginos are sensitive to the CP quantum number of the exchanged Higgs bosons and can thus be used to separate overlapping resonances, as well as to determine the CP quantum number of a single resonance. For equal charginos, these asymmetries are not sensitive to the interference of CP-even and CP-odd Higgs exchange channels. For neutralino pair production in mu+ mu- annihilation we study processes similar to those for chargino production. Line shape measurements of neutralino pair production allow a precise determination of the ratio of the H-neutralino and A-neutralino couplings. Neutralino pair production with subsequent two-body decay of one of the neutralinos in the intermediate tan(beta) region is sensitive to the interference of H and A and may be measured with a large statistical significance. The Majorana nature of the neutralinos implies that the beam polarization asymmetries vanish for the remaining production channels. For neutralino pair production with subsequent two-body decays of both neutralinos we analyze observables similar to those in chargino production. 
The main difference lies in the intrinsic relative CP quantum number of the neutralino pair, which depends on the chosen scenario. We have thus shown that the coupling of the Higgs bosons to the gaugino-higgsino sector can be probed at a muon collider in chargino and neutralino pair production, both by analyzing the production line shape around the resonances and by studying the chargino and neutralino polarizations via their decays.
This thesis contains two major parts: The first part introduces the reader to three independent concepts for treating strongly correlated many body physics. On the analytical side this is the SO(5) theory (Chap. 3), which provides the general frame. On the numerical side these are the Stochastic Series Expansion (SSE) (Chap. 1) and the Contractor Renormalization Group (CORE) approach (Chap. 2). The central idea of this thesis was to combine the above concepts in order to achieve a better understanding of the high-T_c superconductors (HTSC). The results obtained by this combination can be found in the second major part of this thesis (chapters 4 and 5). The main idea of this thesis, i.e., to combine the SO(5) theory with the capabilities of bosonic quantum Monte Carlo simulations and those of the CORE approach, has proven to be a very successful ansatz. Two different approaches, one based on symmetry and one on renormalization-group arguments, motivate an effective bosonic Hamiltonian. In a subsequent step the effective Hamiltonian has been simulated efficiently using the SSE. The results reproduce salient experiments on high-T_c superconductors. In addition, it has been shown that the model can be extended to also capture charge ordering. These results also form a profound basis for further studies; for example, one could address the open question of SO(5)-symmetry restoration at a multicritical point in the extended pSO(5) model, where longer ranged interactions are included.
The continuous degradation of semiconductor lasers, especially lead-chalcogenide lasers, requires regular monitoring of typical properties such as tuning characteristics and linewidth in spectroscopic systems. With a view to the highest possible degree of automation, an online analysis method for this monitoring will be necessary in the long term. The commonly used method of setting the laser operating point via underlying mode maps has the serious disadvantage that such mode maps are usually not measured under dynamic modulation conditions. Precisely in the dynamic case, these maps are sensitive to changes caused by cycling and degradation of the laser. Etalon signals are not sufficiently reliable with respect to the tuning characteristics and are therefore inadequate for the desired automation. Mode jumps or weak feedback effects cannot readily be identified in the interferogram. An extended analysis of the perturbations of these interferograms in the time-frequency domain by means of an AOK (Adaptive Optimal Kernel) transformation proved to be considerably more informative, especially for signals with few periods. The linewidth of lead-chalcogenide lasers was determined by means of optical homodyne mixing. For incoherent superposition, the spectral distribution of the mixed signal corresponds to the convolution of the original distribution with itself. The laser is not tuned in this process; the optical delay was realized by means of an integrated White cell. It was observed that, depending on the degree of noise of the injection current, the linewidth profile changed from Lorentzian to Gaussian. Heterodyne measurements were carried out with an external CO2 laser as local oscillator. The linewidth of a CO2 laser, a few kHz, is negligible compared to that of a lead-chalcogenide laser, and the superposition is completely incoherent. 
Spectral distributions with a typical Lorentzian profile from 10 MHz up to 100 MHz and beyond were measured. Conspicuous were frequently occurring symmetric side peaks in the regions of the flanks of the Lorentzian profile. A numerical simulation of a laser diode model based on rate equations, with parameter values typical for lead-chalcogenide lasers, showed that the nonlinear laser model can develop pronounced multiples of resonances already at a spacing of 25 MHz. Such resonances reappear in the E-field spectrum as typical relaxation oscillations in the sidebands and explain the side peaks within the spectral distribution observed in the measurement. The strength of the sidebands is a measure of the correlation between phase and amplitude fluctuations. The model for the numerical calculation of the E-field was extended with thermal behavior. A comprehensive characterization method for the automated setup of a modulated laser system must be dynamic and time-resolved. The evaluation of optical mixing frequencies is then no longer restricted to the direct interpretation of individual spectra, but extends to analysis in the time-frequency domain. For a direct and fast time-frequency transformation, the short-time Fourier transform (STFT) suggests itself; it can moreover be implemented relatively easily in modern signal-processor technology. It proves to be very robust and sufficient for the analysis of heterodyne signals required here. Fixing the analysis window within an STFT defines a fixed resolution in time and frequency. 
Comparative analyses of mixed signals with a continuous wavelet transform have shown that details in the time-frequency domain can indeed be worked out better, but the computational effort is disproportionately larger due to the variable scaling, the resulting strongly redundant analysis, and its representation. An analysis of the linewidth profile is then carried out via the evolution of the scaling of a signal. The effective linewidth determined via heterodyne signals under modulated tuning should rather be called a "dynamic" or "intrinsic" laser linewidth. A direct correlation of the laser's frequency variation with the noise of the injection current is evident. The effective bandwidth of the current noise is limited by the system electronics on the one hand and by the modulation bandwidth of the laser on the other. Besides the important parameters of tuning and linewidth, the dynamic time-frequency analysis of heterodyne signals also reveals further phenomena such as feedback, mode superposition, or transient behavior due to direct coupling between intensity and frequency modulation.
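As an illustration of the short-time Fourier analysis described above, the following sketch applies `scipy.signal.stft` to a synthetic chirp, used here as a stand-in for a heterodyne beat signal with drifting frequency; all signal parameters are invented for the example.

```python
import numpy as np
from scipy.signal import chirp, stft

# Synthetic "heterodyne" signal: a beat note whose frequency drifts from
# 50 Hz to 200 Hz over one second (illustrative values only).
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
sig = chirp(t, f0=50.0, t1=1.0, f1=200.0)

# STFT with a fixed 128-sample window: the window length fixes the
# time/frequency resolution trade-off discussed in the text.
f, tt, Zxx = stft(sig, fs=fs, nperseg=128)

# Track the dominant frequency in each time slice (the "ridge").
ridge = f[np.argmax(np.abs(Zxx), axis=0)]
print(ridge[1], ridge[-2])   # rises from near 50 Hz towards 200 Hz
```

The frequency resolution here is fs/nperseg ≈ 7.8 Hz; a longer window sharpens the frequency estimate at the cost of time resolution, which is exactly the fixed trade-off the text attributes to the STFT.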
We study the production and detection of selectrons with masses beyond the pair-production threshold at future linear colliders with center-of-mass energies of 500 GeV and 800 GeV. To this end we consider the production of left and right selectrons in association with the lightest neutralino or chargino through electron-electron, electron-positron, and electron-photon scattering within the MSSM. We additionally investigate the production through electron-electron scattering in two extended models, the NMSSM and an E6 model with an additional U(1) gauge factor.
The astronomical exploration at energies between 30\,GeV and $\lesssim$\,350\,GeV was the main motivation for building the \MAGIC-telescope. With its 17\,m \diameter\ mirror it is the world's largest imaging air-Cherenkov telescope. It is located at the Roque de los Muchachos on the Canary island of San Miguel de La Palma at 28.8$^\circ$\,N, 17.8$^\circ$\,W, 2200\,m a.s.l. The telescope detects Cherenkov light produced by relativistic electrons and positrons in air showers initiated by cosmic gamma-rays. The imaging technique is used to powerfully reject the background due to hadronically induced air showers from cosmic rays. Their inverse power-law energy distribution leads to an increase of the event rate with decreasing energy threshold. For \MAGIC this implies a trigger rate on the order of 250\,Hz, and a correspondingly large data stream to be recorded and analyzed. A robust analysis software package, including the general framework \MARS, was developed and commissioned to allow the automation necessary for data taken under variable observing conditions. Since many of the astronomical sources of high-energy radiation, in particular the enigmatic gamma-ray bursts, are of a transient nature, the telescope was designed to allow repositioning within several tens of seconds, keeping a tracking accuracy of $\lesssim\,$0.01$^\circ$. Employing a starguider, a tracking accuracy of $\lesssim\,$1.3\,minutes of arc was obtained. The main class of sources at very high gamma-ray energies, known from previous imaging air-Cherenkov telescopes, are Active Galactic Nuclei with relativistic jets, the so-called high-peaked Blazars. Their spectrum is entirely dominated by non-thermal emission, spanning more than 15 orders of magnitude in energy, from radio to gamma-ray energies. Predictions based on radiation models invoking a synchrotron self-Compton or hadronic origin of the gamma-rays suggest that a fairly large number of them should be detectable by \MAGIC. 
Promising candidates have been chosen from existing compilations, requiring a high (synchrotron) X-ray flux, assumed to be related to a high (possibly inverse-Compton) flux at GeV energies, and a low distance, in order to avoid strong attenuation due to pair-production in interactions with low-energy photons from the extragalactic background radiation along the line of sight. Based on this selection the first \AGN\ emitting gamma-rays at 100\,GeV, 1ES\,1218+304 at a redshift of $z=0.182$, was discovered, one of the two farthest known \AGN\ emitting in the TeV energy region. In this context, the automated analysis chain was successfully demonstrated. The source was observed in January 2005 during six moonless nights for 8.2\,h. At the same time the collaborating \KVA-telescope, located near the \MAGIC\ site, observed in the optical band. The calculated light curve showed no day-to-day variability and is compatible with a constant flux of $F($\,$>$\,$100\,\mbox{GeV})=(8.7\pm1.4) \cdot 10^{-7}\,\mbox{m}^{-2}\,\mbox{s}^{-1}$ within the statistical errors. A differential spectrum between 87\,GeV and 630\,GeV was calculated and is compatible with a power law of $F_E(E) = (8.1\pm 2.1) \cdot 10^{-7}(E/\mbox{250\,GeV})^{-3.0\pm0.4}\,\mbox{TeV}^{-1}\,\mbox{m}^{-2}\,\mbox{s}^{-1}$ within the statistical errors. The spectrum emitted by the source was obtained by taking into account the attenuation due to pair-production with photons of the extragalactic background at low photon energies. A homogeneous, one-zone synchrotron self-Compton model has been fitted to the collected multi-wavelength data. Using the simultaneous optical data, a best-fit model could be obtained from which some physical properties of the emitting plasma could be inferred. The result was compared with the so-called {\em Blazar sequence}.
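As a rough consistency check, the quoted differential power law can be integrated analytically above the 100 GeV threshold. The numbers below are the best-fit values from the abstract; the result only needs to agree with the measured integral flux at the order-of-magnitude level, since the fit carries sizable errors and the measured spectrum ends at 630 GeV while the integral formally extends to infinity.

```python
# Integral flux above a threshold for a power law F_E(E) = N0 * (E/E0)^(-G):
#   F(>E_th) = N0 * E0 / (G - 1) * (E_th / E0)^(1 - G)
N0 = 8.1e-7      # TeV^-1 m^-2 s^-1 (best-fit normalization at the pivot E0)
E0 = 0.25        # TeV (pivot energy, 250 GeV)
G = 3.0          # best-fit photon index
E_th = 0.1       # TeV (100 GeV threshold)

F_int = N0 * E0 / (G - 1.0) * (E_th / E0) ** (1.0 - G)
print(F_int)     # ~6.3e-7 m^-2 s^-1, same order as the measured 8.7e-7
```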
Neural networks can synchronize by learning from each other. For that purpose they receive common inputs and exchange their outputs. Adjusting discrete weights according to a suitable learning rule then leads to full synchronization in a finite number of steps. It is also possible to train additional neural networks by using the inputs and outputs generated during this process as examples. Several algorithms for both tasks are presented and analyzed. In the case of Tree Parity Machines the dynamics of both processes is driven by attractive and repulsive stochastic forces. Thus it can be described well by models based on random walks, which represent either the weights themselves or order parameters of their distribution. However, synchronization is much faster than learning. This effect is caused by different frequencies of attractive and repulsive steps, as only neural networks interacting with each other are able to skip unsuitable inputs. Scaling laws for the number of steps needed for full synchronization and successful learning are derived using analytical models. They indicate that the difference between both processes can be controlled by changing the synaptic depth. In the case of bidirectional interaction the synchronization time increases proportional to the square of this parameter, but it grows exponentially, if information is transmitted in one direction only. Because of this effect neural synchronization can be used to construct a cryptographic key-exchange protocol. Here the partners benefit from mutual interaction, so that a passive attacker is usually unable to learn the generated key in time. The success probabilities of different attack methods are determined by numerical simulations and scaling laws are derived from the data. If the synaptic depth is increased, the complexity of a successful attack grows exponentially, but there is only a polynomial increase of the effort needed to generate a key. 
Therefore the partners can reach any desired level of security by choosing suitable parameters. In addition, the entropy of the weight distribution is used to determine the effective number of keys, which are generated in different runs of the key-exchange protocol using the same sequence of input vectors. If the common random inputs are replaced with queries, synchronization is possible, too. However, the partners have more control over the difficulty of the key exchange and the attacks. Therefore they can improve the security without increasing the average synchronization time.
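The synchronization by mutual learning described above can be sketched with a minimal Tree Parity Machine simulation. The parameters K = 3, N = 10, L = 3 and the Hebbian rule are one illustrative choice; the thesis analyzes several learning rules and a much wider parameter range, and the step cap below is simply a generous bound.

```python
import numpy as np

class TreeParityMachine:
    """Minimal Tree Parity Machine: K hidden units with N inputs each,
    discrete weights in {-L, ..., L} (the synaptic depth)."""
    def __init__(self, K=3, N=10, L=3, rng=None):
        self.K, self.N, self.L = K, N, L
        self.w = (rng or np.random.default_rng()).integers(-L, L + 1, (K, N))

    def output(self, x):
        # sigma_k = sign(w_k . x_k); sign(0) is mapped to -1 by convention
        h = np.einsum('kn,kn->k', self.w, x)
        self.sigma = np.where(h > 0, 1, -1)
        return int(np.prod(self.sigma))        # tau = product of the sigma_k

    def hebbian(self, x, tau):
        # only hidden units agreeing with the total output are updated
        for k in range(self.K):
            if self.sigma[k] == tau:
                self.w[k] += tau * x[k]
        np.clip(self.w, -self.L, self.L, out=self.w)  # bounded weights

rng = np.random.default_rng(7)
A, B = TreeParityMachine(rng=rng), TreeParityMachine(rng=rng)

steps = 0
while not np.array_equal(A.w, B.w) and steps < 100_000:
    x = rng.choice([-1, 1], size=(A.K, A.N))   # common random input
    tau_A, tau_B = A.output(x), B.output(x)
    if tau_A == tau_B:                         # exchange outputs; learn only
        A.hebbian(x, tau_A)                    # on agreement (attractive step)
        B.hebbian(x, tau_B)
    steps += 1

print(np.array_equal(A.w, B.w), steps)
```

Skipping updates on disagreement is what gives the mutually interacting partners their advantage over a passive attacker, who cannot influence which inputs are used.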
In the first part the bilayer Heisenberg model and the 2D Kondo necklace model are studied. Both models exhibit a quantum phase transition between an ordered and a disordered phase. The question addressed is the coupling of a single doped hole to the critical fluctuations. A self-consistent Born approximation predicts that the doped hole couples to the magnons such that the quasiparticle residue vanishes at the quantum critical point. In this work the delicate question of the fate of the quasiparticle residue across the quantum phase transition is also tackled by means of large scale quantum Monte Carlo simulations. Furthermore the dynamics of a single hole doped into the magnetic background is investigated. In the second part an analysis of the spiral staircase Heisenberg ladder is presented. The ladder consists of two ferromagnetically coupled spin-1/2 chains, where the coupling within the second chain can be tuned by twisting the ladder. Within this model the crossover between an ungapped spin-1/2 system and a gapped spin-1 system can be studied. In this work the emphasis is on the opening of the spin gap with respect to the ferromagnetic rung coupling. It is shown that there are essential differences in the scaling behavior of the spin gap depending on the twist of the model. Moreover, by means of the string order parameter it is shown that the system remains in the Haldane phase within the whole parameter range although the spin gap scales differently. The tools used for the analyses are mainly large scale quantum Monte Carlo methods, but also exact diagonalization techniques as well as mean field approaches.
The basic question which drove our whole work was to find a meaningful noncommutative gauge theory even for the time-like case ($\theta^{0 i} \neq 0$). In order to be able to tackle questions regarding unitarity, it is not sufficient to consider theories which include the noncommutative parameter only up to a finite order. The reason is that in order to investigate tree-level unitarity or the optical theorem in loops one has to know the behavior of the noncommutative theory for center-of-mass energies much greater than the noncommutative scale. Therefore an effective theory, which is by construction only valid up to the noncommutative scale, is not sufficient for our purpose. Our model is based on two fundamental assumptions. The first assumption is given by the commutation relations \eqref{eq:ncalg}. This led to the Moyal-Weyl star-product \eqref{eq:astproduct2}, which replaces all point-like products between two fields. The second assumption is that the model built this way is not only invariant under the noncommutative gauge transformation but also under the commutative one. In order to obtain an action of such a model one has to replace the fields by their appropriate \swms. We chose the gauge fixed action \eqref{eq:actioncgf} as the fundamental action of our model. After having constructed the action of the NCQED including the {\swms} we were confronted with the problem of calculating the {\swms} to all orders in $\tMN$. By means of \cite{bbg} we could calculate the {\swms} order by order in the gauge field, where each order in the gauge field contains all orders in the noncommutative parameter (\cf chapter \ref{chapter:swms}). By comparing the maps with the result we obtained from an alternative ansatz \cite{bcpvz}, we realized that already the simplest {\swm} for the gauge field is not unique. In chapter \ref{chapter:ambiguities} we examined this ambiguity, which we could parametrize by an arbitrary function $\astf$. 
The next step was to derive the Feynman rules for our NCQED. One finds that the propagators remain unchanged, so that the free theory is equal to commutative QED. The fermion-fermion-photon vertex contains not only a phase factor coming from the Moyal-Weyl star-product but also two additional terms which have their origin in the \swms. Besides the 3-photon vertex, which is already present in NCQED without {\swms} and which also acquires additional terms from the \swms, one has a contact vertex which couples two fermions to two photons. After having derived all the vertices we calculated the pair annihilation scattering process $e^+ e^- \rightarrow \gamma \gamma$ at Born level. By choosing the parameter $\kggg = 1$ (\cf section \ref{sec:represent}), we found that the amplitude of the pair annihilation process becomes equal to the amplitude of the NCQED without \swms. This means that, at least for this process, the NCQED excluding {\swms} is only a special case of NCQED including \swms. On the basis of the pair annihilation process, we afterwards investigated tree-level unitarity. In order to satisfy tree-level unitarity we had to constrain the arbitrary function $\astf$. We found that the series expansion of $\astf$ has to start with unity. In addition, the even part of the function must not increase faster than $s^{-1/2} \log(s)$ for $s \rightarrow \infty$, whereas the odd part of the $\astf$-function cannot be constrained, at least by the process we considered. Assuming these constraints on the $\astf$-function, we could show that tree-level unitarity is satisfied if one incorporates the uncertainties present in the energy and the momenta of the scattered particles, \ie the uncertainties of the center-of-mass energy and the scattering angles. These uncertainties are not exclusively due to the finite experimental resolution. 
A delta-like center-of-mass energy as well as delta-like momenta are in general not possible because the scattered particles are never exact plane waves.
Despite its precise agreement with experiment, the validity of the standard model (SM) of elementary particle physics is ensured only up to a scale of several hundred GeV so far. Moreover, the inclusion of gravity into a unifying theory poses a problem which cannot be solved by ordinary quantum field theory (QFT). String theory, the most popular ansatz for a unified theory, predicts QFT on noncommutative space-time as a low energy limit. Nevertheless, independently of the motivation given by string theory, the nonlocality inherent to noncommutative QFT opens up the possibility for the inclusion of gravity. There are no theoretical predictions for the energy scale Lambda_NC at which noncommutative effects arise; it can be assumed to lie in the TeV range, which is the energy range probed by the next generation of colliders. Within this work we study the phenomenological consequences of a possible realization of QFT on noncommutative space-time relying on this assumption. The motivation for this thesis was the gap in the range of phenomenological studies of noncommutative effects in collider experiments, due to the absence of Large Hadron Collider (LHC) studies of noncommutative QFTs in the literature. In the first part we thus performed a phenomenological analysis of the hadronic process pp -> Z gamma -> l^+l^- gamma at the LHC and of electron-positron pair annihilation into a Z boson and a photon at the International Linear Collider (ILC). The noncommutative extension of the SM considered within this work relies on two building blocks: the Moyal-Weyl star-product of functions on ordinary space-time and the Seiberg-Witten maps. The latter relate the ordinary fields and parameters to their noncommutative counterparts such that ordinary gauge transformations induce noncommutative gauge transformations.
This requirement is expressed by a set of inhomogeneous differential equations (the gauge equivalence equations) which are solved by the Seiberg-Witten maps order by order in the noncommutative parameter Theta. Thus, by means of the Moyal-Weyl star-product and the Seiberg-Witten maps, a noncommutative extension of the SM can be achieved as an effective theory expanded in powers of Theta, providing the framework of our phenomenological studies. A consequence of the noncommutativity of space-time is the violation of rotational invariance with respect to the beam axis. This effect shows up in the azimuthal dependence of cross sections, which is absent in the SM as well as in other models beyond the SM. The azimuthal dependence of the cross section is thus a typical signature of noncommutativity and can be used to discriminate it from other new physics effects. We have found this dependence to be best suited for deriving sensitivity bounds on the noncommutative scale Lambda_NC. By studying pp -> Z gamma -> l^+l^- gamma to first order in the noncommutative parameter Theta, we show in the first part of this work that measurements at the LHC are sensitive to noncommutative effects only in certain cases, giving bounds on the noncommutative scale of Lambda_NC > 1.2 TeV. Our result improved the bounds present in the literature coming from past and present collider experiments by one order of magnitude. In order to explore the whole parameter range of the noncommutativity, ILC studies are required. By means of e^+e^- -> Z gamma -> l^+l^- gamma to first order in Theta we have shown that ILC measurements of the noncommutative parameters are complementary to LHC measurements. In addition, the bounds on Lambda_NC derived from the ILC are significantly higher and reach Lambda_NC > 6 TeV. The second part of this work arose from the necessity to enlarge the range of validity of our model towards higher energies.
Thus, we expanded the neutral current sector of the noncommutative SM to second order in Theta. We found that, contrary to the general expectation, the theory must be enlarged by additional parameters. The new parameters enter the theory as ambiguities of the Seiberg-Witten maps. The latter are not uniquely determined and differ by homogeneous solutions of the gauge equivalence equations. The expectation was that the ambiguities correspond to field redefinitions and should therefore vanish in scattering matrix elements. However, we proved that this is not the case: the ambiguities do affect physical observables. Our conjecture is that every order in Theta will introduce new parameters to the theory. However, only experiment can decide to what extent efforts at still higher orders in Theta are reasonable, and it will also give directions for the development of theoretical models of noncommutative QFTs.
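For orientation, the abelian Seiberg-Witten maps solving the gauge equivalence equations read, to first order in Theta and in one common convention (sign and normalization choices vary in the literature),

```latex
\[
  \hat{A}_{\mu} = A_{\mu}
    - \tfrac{1}{2}\,\theta^{\kappa\lambda} A_{\kappa}
      \big(\partial_{\lambda}A_{\mu} + F_{\lambda\mu}\big)
    + \mathcal{O}(\theta^{2}),
  \qquad
  \hat{\psi} = \psi
    - \tfrac{1}{2}\,\theta^{\kappa\lambda} A_{\kappa}\,\partial_{\lambda}\psi
    + \mathcal{O}(\theta^{2}).
\]
```

Adding homogeneous solutions of the gauge equivalence equations to these maps produces the ambiguities, and hence the additional parameters, discussed above.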
Calculations of multi-particle processes at the one-loop level: precise predictions for the LHC
(2007)
The Standard Model (SM) of elementary particle physics provides a uniform framework for the description of three fundamental forces: the electromagnetic and weak forces, describing interactions between quarks and leptons, and the strong force, describing a much stronger interaction between the coloured quarks. Numerous experimental tests have been performed in the last thirty years, showing a spectacular agreement with the theoretical predictions of the Standard Model, even at the per mille level, thereby validating the model at the quantum level. An important cornerstone of the Standard Model is the Higgs mechanism, which provides a possible explanation of electroweak symmetry breaking, responsible for the masses of the elementary fermions and of the W and Z bosons, the carriers of the weak force. This mechanism predicts a scalar boson, the Higgs boson, which has escaped discovery so far. If the Higgs mechanism is indeed realised in nature, the upcoming Large Hadron Collider (LHC) at CERN will be able to find the associated Higgs boson. The discovery of a Higgs boson by itself is not sufficient to establish the Higgs mechanism; the basic ingredient is the Higgs potential, which predicts trilinear and quartic couplings. These have to be confirmed experimentally by the study of multi-Higgs production. We therefore present a calculation of the loop-induced processes gg to HH and gg to HHH, and investigate the observability of multi-Higgs boson production at the LHC in the Standard Model and beyond. While the SM cross sections are too small to allow observation at the LHC, we demonstrate that physics beyond the SM can lead to amplified, observable cross sections. Furthermore, the applicability of the heavy top quark approximation in two- and three-Higgs boson production is investigated. We conclude that multi-Higgs boson production at the SuperLHC is an interesting probe of Higgs sectors beyond the SM and warrants further study.
Despite the great success of the SM, it is widely believed that this model cannot be valid for arbitrarily high energies. The LHC will probe the TeV scale, and theoretical arguments indicate the appearance of physics beyond the SM at this scale. The search for new physics requires a precise understanding of the SM; theoretical predictions are needed which match the accuracy of the experiments. For the LHC, most analyses require next-to-leading order (NLO) precision. Only then will we be able to reliably verify or falsify different models. At the LHC, many interesting signatures involve more than two particles in the final state. Precise theoretical predictions for such multi-leg processes are a highly nontrivial task, and new efficient methods have to be applied. The process PP to VV+jet is an important background to Higgs production in association with a jet at the LHC. We compute the virtual corrections to this process at NLO, which form the "bottleneck" for obtaining a complete NLO prediction. The resulting analytic expressions are generated with highly automated computer routines and translated into a flexible Fortran code, which can be employed in the computation of differential cross sections of phenomenological interest. The obtained results for the virtual corrections indicate that the QCD corrections are sizable and should be taken into account in experimental studies for the LHC.
In this thesis, the low-temperature regime of replica symmetry breaking in the SK model has been thoroughly investigated. In order to access this regime and to perform self-consistency calculations with high accuracy at high orders of replica symmetry breaking, a formalism has been developed which reduces the numerical effort to the absolute minimum. The central idea of its derivation is the identification of asymptotic regions in which the recursion relations can be solved analytically. The new object in the numerical treatment is then the correction to this asymptotic regime, represented by a sequence of so-called kernel correction functions. This method increased the efficiency of the numerics considerably, so that up to 200 orders of RSB could be calculated at zero temperature and zero external field, and up to 60 (65) orders of RSB at finite temperature (external field). The remarkably high precision of these calculations allowed the extraction of several quantities with an accuracy exceeding the literature values by several orders of magnitude. The results of the numerical calculations have been analyzed in great detail. In particular, the convergence behavior of various observables and of the order function with respect to the RSB order has been investigated, since the high- but finite-order RSB regime has been addressed in the present work for the first time. Several unexpected features of finite order replica symmetry breaking have been observed.
It is the aim of this work to develop, implement, and apply a new numerical scheme for modeling turbulent, multiphase astrophysical flows such as galaxy cluster cores and star forming regions. The method combines the capabilities of adaptive mesh refinement (AMR) and large-eddy simulations (LES) to capture localized features and to represent unresolved turbulence, respectively; it will be referred to as Fluid mEchanics with Adaptively Refined Large-Eddy SimulationS, or FEARLESS.
In this PhD thesis, we study heteroepitaxial crystal growth by means of Monte Carlo simulations. Of particular interest in this work is the influence of the lattice mismatch of the adsorbate relative to the substrate on surface structures. In the framework of an off-lattice model, we consider one monolayer of adsorbate and investigate the emerging nanopatterns in equilibrium and their formation during growth. In chapter 1, a brief introduction is given, which describes the role of computer simulations in the field of condensed matter physics. Chapter 2 is devoted to some technical basics of the experimental methods of molecular beam epitaxy and their theoretical description. Before a model for the simulation can be designed, it is necessary to consider the individual processes which occur during epitaxial growth. For that purpose we look at an experimental setup and extract the main microscopic processes. Afterwards, a brief overview of different theoretical concepts describing these physical processes is given. In chapter 3, the model used in the simulations is presented. The aim is to investigate the growth of an fcc crystal in the [111] direction. In order to keep the simulation times within a feasible limit, a simple pair potential, the Lennard-Jones potential, is used with continuous particle positions, which are necessary to describe effects resulting from the atomic mismatch in the crystal. Furthermore, the detailed algorithm is introduced, which is based on the idea of calculating the barrier of each diffusion event and using these barriers in a rejection-free method. Chapter 4 is devoted to the simulation of equilibrium. The influence of different parameters on the structures emerging in the first monolayer on the surface, which is completely covered with two adsorbate materials, is studied. In particular, the competition between binding energy and strain leads to very interesting pattern formations such as islands or stripes.
In chapter 5, the results of the growth simulations are presented. At first, we introduce a model for off-lattice Kinetic Monte Carlo simulations. Since the costs in simulation time are enormous, some simplifications in the calculation of diffusion barriers are necessary, and therefore the previous model is supplemented with some elements from the so-called ball-and-spring model. The next section is devoted to the calculation of energy barriers, followed by the presentation of the growth simulations. Binary systems with only one sort of adsorbate are investigated as well as ternary systems with two different adsorbates. Finally, a comparison to the equilibrium simulations is drawn. Chapter 6 contains some concluding remarks and gives an outlook on possible further investigations.
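The rejection-free scheme described above (compute the barrier of every possible diffusion event, then draw one event with probability proportional to its thermally activated rate) can be sketched as follows; the attempt frequency nu0 and the barrier values used below are illustrative assumptions, not parameters taken from the thesis.

```python
import math
import random

def kmc_step(barriers, temperature, time, nu0=1e13, kb=8.617e-5, rng=random):
    """One rejection-free Kinetic Monte Carlo step (BKL-type algorithm).

    barriers    -- activation energies in eV, one per possible diffusion event
    temperature -- substrate temperature in K
    time        -- current simulation time in s
    nu0         -- attempt frequency in 1/s (illustrative value)
    Returns (index of the performed event, advanced time).
    """
    # Arrhenius rate for every thermally activated event
    rates = [nu0 * math.exp(-eb / (kb * temperature)) for eb in barriers]
    total = sum(rates)
    # choose one event with probability proportional to its rate -- no rejections
    target = rng.uniform(0.0, total)
    acc = 0.0
    chosen = len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if acc >= target:
            chosen = i
            break
    # advance the clock by an exponentially distributed waiting time
    time += -math.log(rng.random()) / total
    return chosen, time
```

Because every step performs an event, the scheme stays efficient even when most barriers are high and ordinary Metropolis moves would be rejected almost always.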
In this PhD thesis, the effect of strain on heteroepitaxial growth is investigated by means of Kinetic Monte Carlo simulations. In this context the lattice misfit, arising from the different lattice constants of the adsorbate and the substrate material, is of particular interest. This lattice misfit leads to long-range elastic strain effects which strongly influence the entire growing crystal and its resulting surface morphology. The main focus of this work is the investigation of different strain relaxation mechanisms and their controlling parameters, revealing interesting consequences for the subsequent growth. Since epitaxial growth is carried out under conditions far from thermodynamic equilibrium, it is strongly determined by surface kinetics. The relevant kinetic microscopic processes are described, followed by theoretical considerations of heteroepitaxial growth, giving an overview of several independent methodological streams used to model epitaxy on different time and length scales, as well as the characterization of misfit dislocations and the classification of epitaxial growth modes based on thermodynamic considerations. The epitaxial growth is performed by means of Kinetic Monte Carlo simulations, which allow for the consideration of long-range effects in systems with a lateral extension of a few hundred atoms. By using an off-lattice simulation model, the particles are able to leave their predefined lattice sites, which is an indispensable condition for simulating strain relaxation mechanisms. The main idea of our model is to calculate the activation energy of all relevant thermally activated processes using simple pair potentials and then to realize the dynamics by performing each event according to its probability by means of a rejection-free algorithm.
In addition, the crystal relaxation procedure, the grid-based particle access method, which accelerates the simulation enormously, and the efficient implementation of the algorithm are discussed. To study the influence of long-range elastic strain effects, the main part of this work was carried out on the two-dimensional triangular lattice, which can be treated as a cross section of the real three-dimensional case. Chapter 4 deals with the formation of misfit dislocations as a strain relaxation mechanism and the resulting consequences for the subsequent heteroepitaxial growth. We can distinguish between two principally different dislocation formation mechanisms, depending strongly on the sign as well as on the magnitude of the misfit, although the surface kinetics also need to be taken into account. Additionally, the dislocations affect the lattice spacings of the crystal, whose observed progression is in good qualitative agreement with experimental results. Furthermore, the dislocations influence the subsequent growth of the adsorbate film, since the potential energy of an adatom is modulated by buried dislocations. A clear correlation between the lateral positions of buried dislocations and the positions of mounds grown on the surface can be observed. In chapter 5, an alternative strain relaxation mechanism is studied: the formation of three-dimensional islands enables the particles to approach their preferred lattice spacing. We demonstrate that it is possible to realize each of the three epitaxial growth modes within our simulation model: Volmer–Weber, Frank–van der Merwe or layer-by-layer, and Stranski–Krastanov growth. Moreover, we can show that the emerging growth mode depends in principle on two parameters: on the one hand, the interaction strength of adsorbate particles with each other compared to the interaction of adsorbate with substrate particles, and on the other hand, the lattice misfit between adsorbate and substrate particles.
A sensible choice of these two parameters allows the realization of each growth mode within the simulations. In conclusion, both the formation of nanostructures controlled by an underlying dislocation network and the tendency to form ordered arrays of strain-induced three-dimensional islands can be exploited for self-organized pattern formation. In chapter 6, we extend our model to three dimensions and investigate the effect of strain on growth on bcc(100) surfaces. We introduce an anisotropic potential yielding a stable bcc lattice structure within the off-lattice representation. We can show that the strain built up in submonolayer islands is mainly released at the island edges, and that the lattice misfit has a strong influence on the diffusion process on the plane surface as well as on the situation at island edges, with significant consequences for the appearance of submonolayer islands.
We model Milky Way-like isolated disk galaxies in high resolution three-dimensional hydrodynamical simulations with the adaptive mesh refinement code Enzo. The model galaxies include a dark matter halo and a disk of gas and stars. We use a simple implementation of sink particles to measure and follow collapsing gas, and in some cases simulate star formation as well as stellar feedback. We investigate two largely different realizations of star formation. Firstly, we follow the classical approach of transforming cold, dense gas into stars with a fixed efficiency. Such simulations are known to suffer from an overestimation of star formation, and we observe this behavior as well. Secondly, we use our newly developed FEARLESS approach to combine hydrodynamical simulations with a semi-analytic modeling of unresolved turbulence, and use this technique to dynamically determine the star formation rate. The subgrid-scale turbulence regulated star formation simulations point towards considerably smaller star formation efficiencies and hence more realistic overall star formation rates. More work is necessary to extend this method to account for the observed highly supersonic turbulence in molecular clouds and ultimately to use the turbulence regulated algorithm to simulate observed star formation relations.
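The classical fixed-efficiency prescription mentioned above (a fixed fraction of the cold, dense gas is converted into stars per local free-fall time) can be sketched as follows; the 10% efficiency and the cgs units are illustrative assumptions, not values from the thesis.

```python
import math

G_CGS = 6.674e-8  # gravitational constant in cm^3 g^-1 s^-2

def free_fall_time(rho):
    """Free-fall time t_ff = sqrt(3*pi / (32*G*rho)) for gas of density rho (g/cm^3)."""
    return math.sqrt(3.0 * math.pi / (32.0 * G_CGS * rho))

def star_formation_rate_density(rho, efficiency=0.1):
    """Fixed-efficiency prescription: a fixed fraction of the local gas mass is
    turned into stars per free-fall time, so the SFR density scales as rho^(3/2).
    The 10% efficiency is an illustrative choice, not a value from the thesis."""
    return efficiency * rho / free_fall_time(rho)
```

The steep rho^(3/2) scaling of this prescription is one reason a fixed efficiency tends to overestimate star formation in dense regions, which is what the turbulence-regulated FEARLESS approach is designed to correct.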
The investigation of strongly correlated electron systems on the basis of the two-dimensional Hubbard model is the central topic of this work. We analyze the fate of the Mott insulator both upon doping and upon reduction of the interaction strength. The numerical evaluation is carried out with quantum cluster approximations, which guarantee a thermodynamically consistent description of the ground-state properties. The framework of self-energy-functional theory used here offers great flexibility in the construction of cluster approximations. A detailed analysis sheds light on the quality and the convergence behavior of different cluster approximations within self-energy-functional theory. For these investigations we use the one-dimensional Hubbard model and compare our results with the exact solution. In two dimensions we find the ground state of the particle-hole-symmetric model at half filling to be an antiferromagnetic insulator, independent of the interaction strength. The inclusion of short-range spatial correlations through our cluster approximation leads to a considerable improvement of the antiferromagnetic order parameter compared with dynamical mean-field theory. Moreover, in the paramagnetic phase we observe a metal-insulator transition as a function of the interaction strength which differs qualitatively from the pure mean-field scenario. Starting from the antiferromagnetic Mott insulator, a filling-driven metal-insulator transition into a paramagnetic metallic phase emerges. Depending on the cluster approximation used, an antiferromagnetic metallic phase appears first. Besides long-range antiferromagnetic order, we have also taken superconductivity into account in our calculations.
The behavior of the superconducting order parameter as a function of doping is in good agreement both with other numerical methods and with experimental results.
Blazars are among the most luminous sources in the universe. Their extreme short-time variability indicates emission processes powered by a supermassive black hole. With the current generation of Imaging Air Cherenkov Telescopes, these sources are explored at very high energies. By lowering the threshold below 100 GeV and improving the sensitivity of the telescopes, more and more blazars are being discovered in this energy regime. For the MAGIC telescope, a low energy analysis has been developed, allowing energies of 50 GeV to be reached for the first time. The method is presented in this thesis using the example of PG 1553+113, for which a spectrum between 50 GeV and 900 GeV is measured. In the energy regime observed by MAGIC, strong attenuation of the gamma-rays is expected from pair production due to interactions of gamma-rays with low-energy photons of the extragalactic background light. For PG 1553+113, this provides the possibility to constrain the redshift of the source, which is still unknown. Well studied from radio to x-ray energies, PG 1553+113 was discovered in 2005 in the very high energy regime. In total, it was observed with the MAGIC telescope for 80 hours between April 2005 and April 2007. From more than three years of data taking, the MAGIC telescope provides huge amounts of data and a large number of files from various sources. To handle this data volume and to monitor the data quality, an automatic procedure is essential. Therefore, a concept for automatic data processing and management has been developed. Thanks to its flexibility, the concept is easily applicable to future projects. The implementation of an automatic analysis has been running stably for three years in the data center in Würzburg and provides consistent results for all MAGIC data, i.e. equal processing ensures comparability. In addition, this database-controlled system allows for easy tests of new analysis methods and for re-processing of all data with a new software version at the push of a button.
At any stage, not only are the availability of the data and its processing status known, but also a large set of quality parameters and results can be queried from the database, facilitating quality checks, data selection and continuous monitoring of the telescope performance. By using the automatic analysis, the whole data sample can be analyzed in a reasonable amount of time, and the analyzers can concentrate on interpreting the results instead. For PG 1553+113, the tools and results of the automatic analysis were used. Compared to the previously published results, the software includes improvements such as absolute pointing correction, absolute light calibration and improved quality and background-suppression cuts. In addition, newly developed analysis methods taking into account timing information were used. Based on the automatically produced results, the presented analysis was enhanced using a special low energy analysis. Part of the data were affected by absorption due to the Saharan Air Layer, i.e. sand dust in the atmosphere. Therefore, a new method has been developed which corrects for the effect of this meteorological phenomenon. Applying the method, the affected data could be corrected for apparent flux variations and for the effects of absorption on the spectrum, allowing the result to be used for further studies. This is especially interesting, as these data were taken during a multi-wavelength campaign. For the whole data sample of 54 hours after quality checks, a signal from the position of PG 1553+113 was found with a significance of 15 standard deviations. Fitting a power law to the combined spectrum between 75 GeV and 900 GeV yields a spectral slope of 4.1 +/- 0.2. Due to the low energy analysis, the spectrum could be extended to below 50 GeV. Fitting down to 48 GeV, the flux remains the same, but the slope changes to 3.7 +/- 0.1. The daily light curve shows that the integral flux above 150 GeV is consistent with a constant flux.
Also for the spectral shape, no significant variability was found in three years of observations. In July 2006, a multi-wavelength campaign was performed. Simultaneous data from the x-ray satellite Suzaku, the optical telescope KVA and the two Cherenkov experiments MAGIC and H.E.S.S. are available. Suzaku measured for the first time a spectrum up to 30 keV. The source was found to be at an intermediate flux level compared to previous x-ray measurements, and no short-time variability was found in the continuous data sample of 41.1 ksec. Also in the gamma regime, no variability was found during the campaign. Assuming a maximum slope of 1.5 for the intrinsic spectrum, an upper limit of z < 0.74 was determined by deabsorbing the measured spectrum for the attenuation of photons by the extragalactic background light. For further studies, a redshift of z = 0.3 was assumed. Collecting various data from radio, infrared, optical, ultraviolet, x-ray and gamma-ray energies, a spectral energy distribution was determined, including the simultaneous data of the multi-wavelength campaign. Fitting the simultaneous data with different synchrotron-self-Compton models shows that the observed spectral shape can be explained by synchrotron-self-Compton processes. The best result was obtained with a model assuming a log-parabolic electron distribution.
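A power-law fit of the kind quoted above (a photon index of about 4.1 between 75 GeV and 900 GeV) amounts to a straight-line fit in log-log space. A minimal sketch, using synthetic flux points rather than MAGIC data:

```python
import numpy as np

def powerlaw_slope(energies, fluxes):
    """Photon index Gamma of a power law dN/dE ~ E^(-Gamma), obtained from a
    straight-line least-squares fit in log-log space via numpy.polyfit."""
    slope, _intercept = np.polyfit(np.log10(energies), np.log10(fluxes), 1)
    return -slope
```

In a real analysis the fit would of course weight the points by their measurement errors; the unweighted version above only illustrates the log-log linearization.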
At the beginning of regular observations with the MAGIC telescope in December 2004, all but one of the extragalactic sources detected at very high energy (VHE) gamma-rays belonged to the class of high frequency peaked BL Lac (HBL) objects. This motivated a systematic scan of candidate sources to increase the number of known sources and to study their spectral properties systematically. As candidate sources for VHE emission, X-ray bright HBLs were selected from a compilation of active galactic nuclei. The MAGIC observations took place from December 2004 to March 2006. The declination of the objects was restricted to values between -1.2° and +58.8°, corresponding to a maximum zenith distance lower than 30° at culmination. Since gamma-rays are absorbed by photo-pair production in low energy background radiation fields, the redshift of the investigated objects was limited to z < 0.3. Under the assumption that HBLs generally emit the same energy flux at 1 keV as at 200 GeV, only the brightest X-ray sources were observed, leading to a cut in the X-ray flux of F(1 keV) > 2 µJy. Of the fourteen sources observed, four have been detected: 1ES 1218+304 (for the first time at very high energies), 1ES 2344+514 (strong detection in a state of low activity), Mrk 421 and Mrk 501. A hint of a signal at the 3-sigma level from the direction of 1ES 1011+496 has been observed. In the meantime, the object has been confirmed as a source of VHE gamma-rays by a second MAGIC observation campaign triggered by an optical outburst. For ten sources, upper limits on their integral fluxes above 200 GeV have been calculated at the 99% confidence level. To cross-calibrate the different data samples, collected during 14 months, bright muon ring images recorded as background events by the MAGIC telescope have been used.
Based on the development by Meyer (2003), the method has been improved and implemented into the automatic data analysis as a continuous monitor of the calibration and the point spread function of the optical system. While the ring images are generated by muons with small impact parameters, it could be shown that the image parameter distributions of muons with large impact parameters and of gamma showers completely overlap, revealing these muons as the dominant background for gamma-ray observations below energies of 150 GeV. The sample of HBLs (including all HBLs detected at VHE so far) has been investigated for correlations between broad-band spectral indices as determined from simultaneous optical, archival X-ray and radio luminosities, finding that the VHE emitting HBLs do not differ from the non-detected ones. In general, the absorption corrected HBL gamma-ray luminosities at 200 GeV are not higher than their X-ray luminosities at 1 keV. Based on a complete X-ray BL Lac sample, the Hamburg/ROSAT X-ray BL Lac sample, the number of expected VHE sources has been estimated for the performed scan, finding a consistent number under the assumption of a 37% completeness of the investigated sample and a 1 keV-to-200 GeV luminosity ratio of 1.4. An upper limit on the omnidirectional flux at 200 GeV has been calculated by interpolating the sum over the observed fluxes and upper limits. Within the uncertainties, the result is in agreement with the expectations derived from the X-ray luminosity function of BL Lacs. For 1ES 1218+304 and 1ES 2344+514, light curves have been derived, showing evidence for flux variability on time scales of 17 days and 24 h, respectively. In the case of 1ES 1218+304, variability has been reported for the first time at VHEs. For both sources, the energy spectra have been reconstructed and discussed in the context of their broad band spectral energy distribution (SED), using a single zone synchrotron self Compton model.
The SEDs are well fitted by the simulation even though the very high peak frequencies at gamma-rays push the model to its limits. The parameters derived from the simulation are in good agreement with the parameters found for similar HBLs.
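Broad-band spectral indices of the kind used in the correlation study above are commonly defined from two flux measurements at widely separated frequencies, under the convention F_nu ~ nu^(-alpha). A minimal sketch of that two-point definition (the convention, not a formula quoted from the thesis):

```python
import math

def broadband_index(f1, nu1, f2, nu2):
    """Two-point broad-band spectral index alpha, with the convention
    F_nu ~ nu^(-alpha), from fluxes f1, f2 measured at frequencies nu1, nu2."""
    return -math.log(f2 / f1) / math.log(nu2 / nu1)
```

With this sign convention a positive alpha means the flux density falls with frequency, as it does between the X-ray and VHE bands for the HBLs discussed above.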
This thesis is concerned with the description of macroscopic geometries through Loop Quantum Gravity, and in particular with the description of cosmology within full Loop Quantum Gravity. For this purpose we depart from two distinct (classically virtually equivalent) ansätze: one is phase space reduction, the other is the restriction to particular states. It turns out that the quantum analogues of these two approaches are fundamentally different. The quantum analogue of phase space reduction needs a reformulation in terms of the observable Poisson algebra, so that it can be applied to the noncommutative quantum phase space: it rests on the observation that the observable Poisson algebra of classical canonical cosmology is induced by the embedding of the reduced cosmological phase space into the phase space of full General Relativity. Using techniques related to Rieffel induction, we develop a construction for a noncommutative embedding whose classical limit is described by a Poisson embedding. To be able to use this class of noncommutative embeddings for Loop Quantum Gravity, one needs a complete group of diffeomorphisms for the quantum theory, which is constructed. These two results are applied to construct a quantum embedding of a cosmological sector into full Loop Quantum Gravity. The embedded cosmological sector turns out to be discrete, like standard Loop Quantum Cosmology, and can be interpreted as a super-selection sector thereof; however, due to pathologies of the dynamics of full Loop Quantum Gravity, one cannot induce a meaningful dynamics for this cosmological sector. The quantum analogue of restricting the space of states is achieved by explicitly constructing states for Loop Quantum Gravity with smooth geometry. These states do not exist within the Hilbert space of Loop Quantum Gravity, but as states on the observable algebra of Loop Quantum Gravity.
This observable algebra is built from spin network functions, area operators and a restricted set of fluxes. For this algebra to be physically complete, we construct a version of Loop Quantum Geometry based on a fundamental area operator. Since the smooth geometry states are not contained in the Hilbert space of standard Loop Quantum Gravity, we calculate the Hilbert space representation that contains them using the GNS construction. This representation of the observable algebra can be pictured as a classical condensate of geometry with quantum fluctuations on top of it. Using these representations we construct a quantum minisuperspace, which allows for an interpretation of standard Loop Quantum Cosmology in terms of these states and leads us to conjecture a new approach to the implementation of dynamics for Loop Quantum Gravity.
Supersymmetry is currently the best motivated extension of the Standard Model and will be subject to extensive studies in the upcoming generation of colliders. The e-e- mode would be a straightforward extension of the International Linear Collider, currently planned to operate in e+e- mode. The low background in this mode may prove advantageous for the study of CP and Lepton Flavour Violation. In this work a CP-sensitive observable based on transverse beam polarisation is introduced, and the impact of neutralino mixing on the total cross section in case of non-vanishing CP-violating phases is studied in representative scenarios, including non-GUT scenarios. Additionally, the mixing of sleptons is studied in the context of LFV, an analytical approximation is developed, and possible background-free measurements of these effects are investigated.
This thesis is dedicated to a theoretical study of the one-band Hubbard model in the strong coupling limit. The investigation is based on the Dynamical Cluster Approximation (DCA), which systematically restores non-local corrections to the Dynamical Mean Field Approximation (DMFA). The DCA is formulated in momentum space and is characterised by a patching of the Brillouin zone, with momentum conservation retained only between patches. The approximation works well if k-space correlation functions show a weak momentum dependence. In order to study the temperature and doping dependence of the spin and charge excitation spectra, we explicitly extend the Dynamical Cluster Approximation to two-particle response functions. The full irreducible two-particle vertex with three momenta and frequencies is approximated by an effective vertex dependent on the momentum and frequency of the spin and/or charge excitations. The effective vertex is calculated by using the Quantum Monte Carlo method on the finite cluster, whereas the analytical continuation of dynamical quantities is performed by a stochastic version of the maximum entropy method. A comparison with high-temperature auxiliary-field quantum Monte Carlo data serves as a benchmark for our approach to two-particle correlation functions. Our method can reproduce basic characteristics of the spin and charge excitation spectrum. Near and beyond optimal doping, our results provide a consistent overall picture of the interplay between charge, spin and single-particle excitations: a collective spin mode emerges at optimal doping and sufficiently low temperatures in the spin response spectrum and exhibits the energy scale of the magnetic exchange interaction J. Simultaneously, the low-energy single-particle excitations are characterised by a coherent quasiparticle with bandwidth J.
The origin of the quasiparticle can be well understood in a picture of an approximately antiferromagnetically ordered background in which holes are dressed by spin excitations that allow for a coherent motion. With increasing doping, all features linked to the spin polaron vanish in the single-particle as well as in the two-particle spin response spectrum. In the second part of the thesis an analysis of superconductivity in the Hubbard model is presented. The superconducting instability is implemented within the Dynamical Cluster Approximation by allowing U(1)-symmetry-breaking baths in the QMC calculations for the cluster. The superconducting transition temperature T_c is derived from the d-wave order parameter, which is estimated directly on the Monte Carlo cluster. The critical temperature T_c is in striking agreement with the temperature scale estimated from the divergence of the pair-field susceptibility in the paramagnetic phase. A detailed study of the pseudogap and the superconducting gap follows from the investigation of the local and angle-resolved spectral functions.
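The d-wave order parameter referred to above is conventionally parametrized by the form factor Delta(k) = Delta0 (cos kx - cos ky) on the square lattice; a minimal numeric check of its nodal structure (an illustrative sketch, not the cluster estimator used in the thesis):

```python
import math

def d_wave_gap(kx, ky, delta0=1.0):
    """Standard d_{x^2-y^2} form factor on the square lattice."""
    return delta0 * (math.cos(kx) - math.cos(ky))

# The gap vanishes along the Brillouin-zone diagonals (nodal direction)...
node = d_wave_gap(math.pi / 2, math.pi / 2)      # -> 0.0
# ...and has maximal magnitude 2*delta0 at the antinodal points (pi, 0), (0, pi).
antinode = d_wave_gap(math.pi, 0.0)              # -> -2.0
```

This sign change under 90-degree rotation is what distinguishes d-wave pairing from a conventional s-wave gap.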
The Three-Site Higgsless Model is an alternative implementation of electroweak symmetry breaking, which in the Standard Model is mediated by the Higgs mechanism. The main features of this model are the appearance of two new heavy vector resonances W' and Z' with masses > 380 GeV as well as a set of new heavy fermions (> 1.8 TeV). In this model, unitarity of the amplitudes for the scattering of longitudinal gauge bosons is maintained by the exchange of the W' and Z' up to a scale of ~2 TeV. Consistency with the electroweak precision observables from the LEP / LEP-II experiments implies an exceedingly small coupling of the new vector bosons to the light Standard Model fermions (about 3% of the isospin gauge coupling). In this thesis, the LHC phenomenology of this scenario is explored. To this end, we calculated the couplings and widths of all the new particles and implemented the model in the Monte Carlo event generator WHIZARD / O'Mega. With this implementation, we simulated the parton-level production of the gauge boson and fermion partners in different channels possibly suitable for their discovery at the LHC. The results are presented together with an introduction to the model and a discussion of its properties. We find that, while the fermiophobic nature of the new heavy gauge bosons does make them intrinsically difficult to observe at a collider, the LHC should be able to establish the existence of both resonances and even give some hints about the properties of their couplings, which would be a vital test of the consistency of such a scenario. For the heavy fermions, we find that their large mass is accompanied by relative widths of more than 10%, making them ill-suited for a direct discovery at the LHC. Nevertheless, our simulations reveal that there is a part of parameter space where, given enough time, patience and a good understanding of detector and backgrounds, a direct discovery might be possible.
Two-particle excitations, such as spin and charge excitations, play a key role in high-Tc cuprate superconductors (HTSC). Due to the antiferromagnetism of the parent compound, the magnetic excitations are supposed to be directly related to the mechanism of superconductivity. In particular, the so-called resonance mode is a promising candidate for the pairing glue, a bosonic excitation mediating the electronic pairing. In addition, its interactions with itinerant electrons may be responsible for some of the observed properties of HTSC. Hence, getting to the bottom of the resonance mode is crucial for a deeper understanding of the cuprate materials. To analyze the corresponding two-particle correlation functions, we develop in the present thesis a new, non-perturbative and parameter-free technique for T=0 which is based on the Variational Cluster Approach (VCA), an embedded-cluster method for one-particle Green's functions. Guided by the spirit of the VCA, we extract an effective electron-hole vertex from an isolated cluster and use a fully renormalized bubble susceptibility chi0 built from the VCA one-particle propagators. Within our new approach, the magnetic excitations of HTSC are shown to be reproduced for the Hubbard model within the relevant strong-coupling regime. In particular, the famous resonance mode, occurring in the underdoped regime within the superconductivity-induced gap of spin-flip electron-hole excitations, is obtained. Its intensity and hourglass dispersion are in good overall agreement with experiments. Furthermore, characteristic features such as the position in energy of the resonance mode and the difference of the imaginary part of the susceptibility between the superconducting and the normal state are in accord with Inelastic Neutron Scattering (INS) experiments. For the first time, a strongly-correlated parameter-free calculation reveals these salient magnetic properties, supporting the S=1 magnetic exciton scenario for the resonance mode.
Besides the INS data on magnetic properties, further important new insights were gained recently via ARPES (Angle-Resolved Photoemission Spectroscopy) and Raman experiments, which disclosed a quite different doping dependence of the antinodal compared to the near-nodal gap. This thesis provides an approach to the Raman response, similar to the magnetic case, for inspecting this gap dichotomy. In agreement with experiments and with one-particle data obtained in the VCA, we recover the antinodal gap decreasing and the near-nodal gap increasing as a function of doping. Hence, our results show that the Hubbard model accounts for these salient gap features. In summary, we develop a two-particle cluster approach which is appropriate for the strongly-correlated regime and contains no free parameters. Our results obtained with this new approach, combined with the phase diagram and the one-particle excitations obtained in the VCA, strongly support a Hubbard model description of the HTSC cuprate materials.
The observation of neutrino masses and lepton mixing has highlighted the incompleteness of the Standard Model of particle physics. In conjunction with this discovery, new questions arise: why are the neutrino masses so small, what form does their mass hierarchy take, why is the mixing in the quark and lepton sectors so different, and what is the structure of the Higgs sector? In order to address these issues and to predict future experimental results, different approaches are considered. One particularly interesting possibility are Grand Unified Theories (GUTs) such as SU(5) or SO(10). GUTs are vertical symmetries, since they unify the SM particles into multiplets and usually predict new particles, which can naturally explain the smallness of the neutrino masses via the seesaw mechanism. On the other hand, horizontal symmetries, i.e., flavor symmetries acting on the generation space of the SM particles, are also promising. They can serve as an explanation for the quark and lepton mass hierarchies as well as for the different mixings in the quark and lepton sectors. In addition, flavor symmetries are significantly involved in the Higgs sector and predict certain forms of the mass matrices. This high predictivity makes GUTs and flavor symmetries interesting for both theorists and experimentalists. These extensions of the SM can also be combined with theories such as supersymmetry or extra dimensions. In addition, they usually have implications for the observed matter-antimatter asymmetry of the universe or can provide a dark matter candidate. In general, they also predict the lepton flavor violating rare decays mu -> e gamma, tau -> mu gamma and tau -> e gamma, which are strongly bounded by experiments but might be observed in the future. In this thesis, we combine all of these approaches, i.e., GUTs, the seesaw mechanism and flavor symmetries.
Moreover, our aim is to develop and perform a systematic model building approach with flavor symmetries and to search for phenomenological implications. This provides a new perspective in model building, since it allows us to screen models by their predictions on the theoretical and phenomenological side, i.e., we can apply further model constraints to single out a desired model. The results of our approach include diverse lepton flavor and GUT models, a systematic scan of lepton flavor violation, new mass matrices, a new understanding of the lepton mixing angles, a general extension of the idea of quark-lepton complementarity, theta_12 = pi/4 - epsilon/sqrt(2), and for the first time the QLC relation in an SU(5) GUT.
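The quark-lepton complementarity relation quoted above can be evaluated numerically; a sketch assuming the measured Cabibbo angle theta_C of roughly 13.04 degrees as the input epsilon (illustrative values, not taken from the thesis):

```python
import math

theta_c = math.radians(13.04)                     # Cabibbo angle as epsilon (assumed input)
theta_12 = math.pi / 4 - theta_c / math.sqrt(2)   # QLC relation: theta_12 = pi/4 - eps/sqrt(2)

print(math.degrees(theta_12))                     # ~35.8 degrees
```

The result lies within a few degrees of the measured solar mixing angle, which is what makes the relation phenomenologically attractive.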
This work focuses on a fundamental problem in modern numerical relativity: extracting gravitational waves in a coordinate- and gauge-independent way so as to obtain a unique and physically meaningful expression. We adopt a new procedure to extract the physically relevant quantities from the numerically evolved space-time. We introduce a general canonical form for the Weyl scalars in terms of fundamental space-time invariants, and demonstrate how this approach supersedes the explicit definition of a particular null tetrad. As a second objective, we further characterize a particular sub-class of tetrads in the Newman-Penrose formalism: the transverse frames. We establish a new connection between the two major frames for wave extraction, namely the Gram-Schmidt frame and the quasi-Kinnersley frame. Finally, we study how the expressions for the Weyl scalars depend on the chosen tetrad in a space-time containing distorted black holes. We apply our newly developed method and demonstrate the advantage of our approach compared with methods commonly used in numerical relativity.
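The Gram-Schmidt frame referred to above rests on the standard orthonormalization procedure; a minimal Euclidean sketch (in numerical relativity the same projections are taken with the spacetime metric rather than the flat dot product used here):

```python
def gram_schmidt(vectors):
    """Orthonormalize a list of vectors with the Euclidean inner product."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:                       # subtract projections onto earlier basis vectors
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:                      # skip linearly dependent inputs
            basis.append([wi / norm for wi in w])
    return basis

frame = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```

Each output vector has unit norm and is orthogonal to all the others, which is what fixes a frame up to the ordering and signs of the input legs.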
Twenty years after the discovery of the Crab Nebula as a source of very high energy gamma-rays, the number of sources discovered above 100 GeV using ground-based Cherenkov telescopes has grown considerably, at the time of writing of this thesis to a total of 81. The sources are of different types, including galactic sources such as supernova remnants, pulsars, binary systems, or so-far unidentified accelerators, and extragalactic sources such as blazars and radio galaxies. The goal of this thesis work was to search, with the MAGIC telescope, for gamma-ray emission from a particular type of blazar previously undetected at very high gamma-ray energies. The blazars previously detected were all of the same type, the so-called high-peaked BL Lacertae objects. These sources emit purely non-thermal radiation, and exhibit a peak in their radio-to-X-ray spectral energy distribution at X-ray energies. The entire blazar population extends from these rare, low-luminosity BL Lacertae objects with peaks at X-ray energies to the much more numerous, high-luminosity infrared-peaked radio quasars. Indeed, the low-peaked sources dominate the source counts obtained from space-borne observations at gamma-ray energies up to 10 GeV. Their spectra observed at lower gamma-ray energies show power-law extensions to higher energies, although theoretical models suggest them to turn over at energies below 100 GeV. This opened the quest for MAGIC, the Cherenkov telescope with the currently lowest energy threshold. In the framework of this thesis, the search was focused on the prominent sources BL Lac, W Comae and S5 0716+714. Two of the sources were unambiguously discovered at very high energy gamma-rays with the MAGIC telescope, based on the analysis of a total of about 150 hours of data collected between 2005 and 2008. The analysis of this very large data set required novel techniques for treating the effects of twilight conditions on the data quality.
This was successfully achieved and resulted in a vastly improved performance of the MAGIC telescope in monitoring campaigns. The detections of low-peaked and intermediate-peaked BL Lac objects are in line with theoretical expectations, but push the models based on electron shock acceleration and inverse-Compton cooling to their limits. The short variability time scales of the order of one day observed at very high energies show that the gamma-rays originate rather close to the putative supermassive black holes in the centers of blazars, corresponding to less than 1000 Schwarzschild radii when taking into account relativistic bulk motion.
In the context of the indirect search for non-standard physics in the flavour sector of the Standard Model (SM), one of the most interesting processes is the rare inclusive B -> X_s gamma decay. On the one hand, being a flavour-changing neutral current, this B decay is sensitive to new physics, as it is loop-suppressed in the SM. On the other hand, it is only mildly affected by non-perturbative effects, and thus allows for precise theoretical predictions in the framework of renormalization-group improved perturbation theory. Accurate measurements as well as precise theoretical predictions with good control over both perturbative and non-perturbative contributions have to be provided in order to derive stringent constraints on the parameter space of physics beyond the SM. On the experimental side, an outstanding accuracy in the measurement of the B -> X_s gamma decay rate has been achieved, mainly due to the specialized experiments BaBar and Belle at the so-called B factories. To match the small experimental uncertainty, higher-order computations within an effective low-energy theory of the SM are mandatory. In fact, next-to-next-to-leading order (NNLO) QCD corrections are required to provide a prediction for the decay rate with the same precision as the measurement. The NNLO evaluation of the B -> X_s gamma decay rate has been pursued by various groups over the last decade. The project was completed to a large extent, and a first estimate at this level of perturbation theory was obtained in 2006. This prediction, however, lacks important contributions from yet unknown matrix elements that were estimated from results which are only partially known to date. In this work, we provide a framework for the systematic study of the missing matrix elements at NNLO. As the main results of this thesis, we determine fermionic corrections to the charm-quark-mass-dependent matrix elements of four-quark operators in the effective theory at NNLO.
For the first time, the full mass dependence was kept. Moreover, we evaluate both bosonic and fermionic corrections to the decay rate in the limit of vanishing charm quark mass. These findings, combined with the yet unknown remaining real contributions, will help to reduce the uncertainty of the NNLO branching ratio estimate considerably. Another central topic of the present work is the development of an automated high-precision computation of multi-loop multi-scale integrals, a crucial ingredient for the results presented here.
Since its popularization by Randall and Sundrum (RS) one decade ago, and in connection with the $AdS/CFT$ correspondence in particular, 5D warped background spacetime has been one of the most fruitful new ideas in physics beyond the Standard Model (SM), leading to new insights into symmetry breaking and the properties of strongly interacting theories inaccessible to direct perturbative calculations, while at the same time relating gravity to phenomenological model building. This has, among other things, led to a renewed interest in models of electroweak symmetry breaking without physical scalar fields in the guise of so-called 'warped higgsless' models, which could provide an alternative to the famed Higgs mechanism of electroweak symmetry breaking that is part of the Standard Model of particle physics. However, little emphasis was put on reconciling these models with the strong evidence from astrophysical observations that one or several new, as yet unknown, stable particle species exist which form the cold dark matter content of the universe. The nature of dark matter and of electroweak symmetry breaking are among the most prominent puzzles subject to experimental scrutiny at the Tevatron, at direct search experiments, and in the near future at the LHC, which compels us to believe that both issues should be addressed together in any alternative scenario beyond the Standard Model. In this thesis we have investigated phenomenological implications which arise for cosmology and collider physics when the electroweak symmetry breaking sector of warped higgsless models is extended to include warped supersymmetry with conserved $R$ parity. The goal was to find the simplest supersymmetric extension of these models which still has a realistic light spectrum including a viable dark matter candidate.
To accomplish this, we have used the same mechanism which is already at work for symmetry breaking in the electroweak sector to break supersymmetry as well, namely symmetry breaking by boundary conditions. While supersymmetry in five dimensions contains four supercharges and is therefore directly related to 4D $\mathcal{N}=2$ supersymmetry, half of them are broken by the background, leaving us with an ordinary $\mathcal{N}=1$ theory in the massless sector after Kaluza-Klein expansion. We thus use boundary conditions to model the effects of a breaking mechanism for the remaining two supercharges. The simplest viable scenario to investigate is a supersymmetric bulk and IR brane without supersymmetry on the UV brane. Even though parts of the light spectrum are effectively projected out by this mechanism, we retain the rich phenomenology of complete $\mathcal{N}=2$ supermultiplets in the Kaluza-Klein sector. While the light supersymmetric spectrum consists of electroweak gauginos which get their $\mathcal{O}(100\mbox{ GeV})$ masses from IR brane electroweak symmetry breaking, the light gluinos and squarks are projected out on the UV brane. The neutralinos, as mass eigenstates of the neutral bino-wino sector, are automatically the lightest gauginos, making them LSP dark matter candidates with a relic density that can be brought into agreement with WMAP measurements without extensive tuning of parameters. For chargino masses close to the experimental lower bounds at around $m_{\chi^+}\approx 100\dots 110$ GeV, the dark matter relic density points to LSP masses of around $m_\chi\approx 90$ GeV. At the LHC, the standard particle content of our model shares most of the key features of known warped higgsless models. We have performed Monte Carlo simulations of warped higgsless LSP and NLSP production at a benchmark point using O'Mega/WHIZARD, concentrating on missing transverse momentum ($p_T^{\mathrm{miss}}$) in association with third generation quarks.
After background reduction cuts on the quark momenta and angles, we get hadronic cross sections of $\sigma>100\mbox{ fb}$ at $14\mbox{ TeV}$ with characteristic $p_T^{\mathrm{miss}}$ distributions for $\chi\chi t\overline{t}$ final states, while the final states with $b\overline{b}$ pairs have much lower event rates and shapes which are hard to discern in experiments. Our results suggest that the discovery of warped higgsless LSP dark matter at the LHC via missing energy is within reach for the first few $\mbox{fb}^{-1}$ at $14$ TeV if $b$ and in particular $t$ identification is reliable.
We study classical scalar field theories on noncommutative curved spacetimes. Following the approach of Wess et al. [Classical Quantum Gravity 22 (2005), 3511 and Classical Quantum Gravity 23 (2006), 1883], we describe noncommutative spacetimes by using (Abelian) Drinfel’d twists and the associated ⋆-products and ⋆-differential geometry. In particular, we allow for position dependent noncommutativity and do not restrict ourselves to the Moyal–Weyl deformation. We construct action functionals for real scalar fields on noncommutative curved spacetimes, and derive the corresponding deformed wave equations. We provide explicit examples of deformed Klein–Gordon operators for noncommutative Minkowski, de Sitter, Schwarzschild and Randall–Sundrum spacetimes, which solve the noncommutative Einstein equations. We study the construction of deformed Green’s functions and provide a diagrammatic approach for their perturbative calculation. The leading noncommutative corrections to the Green’s functions for our examples are derived.
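For the Moyal-Weyl case, which the twist deformations above generalize, the ⋆-product reads f ⋆ g = fg + (i theta/2)(dx f dy g - dy f dx g) + O(theta^2); a numeric sketch verifying the defining commutator x ⋆ y - y ⋆ x = i theta at first order (illustrative, using finite-difference derivatives):

```python
# First-order Moyal-Weyl star product on functions of two variables.
theta = 0.1
h = 1e-5   # finite-difference step

def dx(f, x, y): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def dy(f, x, y): return (f(x, y + h) - f(x, y - h)) / (2 * h)

def star(f, g, x, y):
    """f * g to first order in theta, evaluated at (x, y)."""
    return f(x, y) * g(x, y) + 0.5j * theta * (
        dx(f, x, y) * dy(g, x, y) - dy(f, x, y) * dx(g, x, y))

X = lambda x, y: x
Y = lambda x, y: y

comm = star(X, Y, 1.3, -0.7) - star(Y, X, 1.3, -0.7)   # -> i*theta
```

For the linear coordinate functions the first-order truncation is exact, so the commutator reproduces the noncommutativity parameter directly.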
This thesis is concerned with the statistical physics of various systems far from thermal equilibrium, focusing on universal critical properties, scaling laws and the role of fluctuations. To this end we study several models which serve as paradigmatic examples, such as surface growth and non-equilibrium wetting as well as phase transitions into absorbing states. As a particularly interesting example of a model with non-conventional scaling behavior, we study a simplified model for pulsed laser deposition by rate equations and Monte Carlo simulations. We consider a set of equations where islands are assumed to be point-like, as well as an improved one that takes the size of the islands into account. The first set of equations is solved exactly, but its predictive power is restricted to the first few pulses. The improved set of equations is integrated numerically, is in excellent agreement with simulations, and fully accounts for the crossover from continuous to pulsed deposition. Moreover, we analyze the scaling of the nucleation density and show numerical results indicating that a previously observed logarithmic scaling does not apply. In order to understand the impact of boundaries on critical phenomena, we introduce particle models displaying a boundary-induced absorbing-state phase transition. These are one-dimensional systems consisting of a single site (the boundary) where creation and annihilation of particles occur, while particles move diffusively in the bulk. We study different versions of these models and confirm that, except for one exactly solvable bosonic variant exhibiting a discontinuous transition with trivial exponents, all the others display a non-trivial behavior, with critical exponents differing from their mean-field values, representing a universality class.
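Rate equations of the kind mentioned above can be sketched with a generic point-island model (an illustrative stand-in, not the thesis's specific equations): monomers of density n are deposited with flux F, diffuse with constant D, nucleate stable islands of density N in pairs, and attach to existing islands:

```python
# Forward-Euler integration of generic point-island rate equations:
#   dn/dt = F - 2*D*n**2 - D*n*N   (deposition, nucleation, capture)
#   dN/dt = D*n**2                  (islands created by monomer pairs)
F, D, dt = 1.0, 100.0, 1e-5
n, N = 0.0, 0.0
for _ in range(200_000):            # integrate up to t = 2
    dn = F - 2.0 * D * n * n - D * n * N
    dN = D * n * n
    n += dt * dn
    N += dt * dN
```

The monomer density first rises, then is depleted by island capture while the island density grows monotonically, the qualitative behaviour the improved equations in the text resolve pulse by pulse.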
We show that these systems are related to a $(0+1)$-dimensional non-Markovian model, meaning that in nonequilibrium a phase transition can take place even in zero dimensions if long-range interactions in time are considered. We argue that these models constitute the simplest universality class of phase transitions into an absorbing state, because the transition is induced by the dynamics of a single site. Moreover, this universality class has a simple field theory, corresponding to a zero-dimensional limit of directed percolation with L{\'e}vy flights in time. Another boundary phenomenon occurs if a nonequilibrium growing interface is exposed to a substrate: in this case a nonequilibrium wetting transition may take place. This transition can be studied through Langevin equations or discrete growth models. In the first case, the Kardar-Parisi-Zhang equation, which defines a very robust universality class for nonequilibrium moving interfaces, is combined with a soft-wall potential; in the second, microscopic models in the corresponding universality class, with evaporation and deposition of particles in the presence of a hard wall, are studied. Equilibrium wetting is related to a particular case of the problem, corresponding to the Edwards-Wilkinson equation with a potential in the continuum approach, or to the fulfillment of detailed balance in the microscopic models. In this thesis we present the analytical and numerical methods used to investigate the problem and the very rich behavior that is observed with them. The entropy production for a Markov process with a nonequilibrium stationary state is expected to give a quantitative measure of the distance from equilibrium.
In the final chapter of this thesis, we consider a Kardar-Parisi-Zhang interface and investigate how entropy production varies with the interface velocity and its dependence on the interface slope, quantities that characterize how far the stationary state of the interface is from equilibrium. We obtain results in agreement with the idea that the entropy production gives a measure of the distance from equilibrium. Moreover, we use the same model to study fluctuation relations. The fluctuation relation is a symmetry in the large deviation function associated with the probability of the variation of entropy during a fixed time interval. We argue that the entropy and the height are similar quantities within the model we consider, and we calculate the Legendre transform of the large deviation function associated with the height for small systems. We observe that there is no fluctuation relation for the height; nevertheless, its large deviation function is still symmetric.
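The Kardar-Parisi-Zhang equation discussed above, dh/dt = nu d2h/dx2 + (lambda/2)(dh/dx)^2 + eta, can be integrated directly on a lattice; a minimal Euler-Maruyama sketch with periodic boundaries (all parameters illustrative) showing the excess growth velocity produced by the nonlinear term:

```python
import random

random.seed(1)
L, nu, lam, dt, dx, amp = 64, 1.0, 1.0, 0.01, 1.0, 1.0
h = [0.0] * L

for _ in range(20_000):
    new = h[:]
    for i in range(L):
        hp, hm = h[(i + 1) % L], h[(i - 1) % L]          # periodic neighbours
        lap = (hp - 2.0 * h[i] + hm) / dx**2             # discrete Laplacian
        grad = (hp - hm) / (2.0 * dx)                    # centered slope
        noise = amp * random.gauss(0.0, 1.0) * dt**0.5   # Ito increment
        new[i] = h[i] + dt * (nu * lap + 0.5 * lam * grad * grad) + noise
    h = new

mean_h = sum(h) / L   # grows linearly: v = (lam/2) * <(dh/dx)^2> > 0
```

With lambda = 0 the mean height would only fluctuate around zero; the nonlinearity rectifies the noise-roughened slopes into a net velocity, which is the quantity related to entropy production in the text.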
As part of this work, a three-dimensional, fully relativistic and parallelized particle-in-cell code was written, extensively tested and applied. The code ACRONYM is flexibly deployable and, in terms of accuracy and stability, state of the art, and thus competitive with the codes used by other groups in astrophysics. Energy is conserved up to an error of < 0.03%, the divergence of the magnetic field always stays below 10^{-12}, and the scaling has by now been tested up to cluster sizes of several 10000 CPUs. After the development of the code, the influence of the fundamental mass ratio m_p/m_e on particle acceleration by plasma instabilities was investigated. This is relevant and important because PiC simulations in the vast majority of cases do not use the real mass ratio, since far too much computing power would otherwise be required to see what happens to the protons and what their influence is on light particles such as electrons and positrons. For this purpose, simulations with mass ratios between m_p/m_e = 1.0 and 200.0 were carried out. They all have in common that periodic boundary conditions were used and that the available simulation domain was completely filled with two counter-streaming plasma populations, in order to exclude any kind of shocks. The raw data of the individual simulations were analyzed in a variety of ways: for example, slices through the particle distribution were produced, and one- and two-dimensional histograms and energy histories were examined.
The following key points emerged: For mass ratios up to about m_p/m_e = 20, the entire two-stream instability develops in a single phase, i.e., flux tubes surrounded by ring-shaped magnetic fields form and then merge until only two remain, and all particles are accelerated over the entire course of the instability. One can conclude that the particle species of different mass, protons and electrons/positrons, are still so strongly coupled by their relatively similar masses that only one instability can develop. For large mass ratios (m_p/m_e > 20), a clear separation into two phases of the instability is visible. First, flux tubes form again and merge with one another (in pairs or larger groups) before the first part of the instability subsides. Subsequently, ring-shaped magnetic fields and flux tubes form again, one of which is usually considerably stronger than all the others, i.e., it is surrounded by stronger magnetic fields and has a higher particle density. In this two-stage instability, the electrons and positrons are significantly accelerated only in the first phase, whereas the much heavier protons gain energy over the entire period. The highest-energy particles reach values around gamma = 250 in the rest frame of the respective plasma population. For future investigations with particle-in-cell codes one can conclude that inferences about the actual behavior at the real mass ratio of m_p/m_e = 1836.2 can only be drawn from simulations with m_p/m_e >> 20, since the strong coupling of the light and heavy particles at smaller mass ratios influences the results very strongly.
Based on the measured times of the instability maxima, an extrapolation was performed which shows that at the real mass ratio the instability would occur at about t = 1400 omega_{pe}^{-1}. To actually simulate this, however, more than 1000 times as many CPU hours would be required. Furthermore, a Maxwell-Jüttner distribution was fitted to the particle distributions of the individual simulations at the peak of the instability, in order to compute both the new temperature of the plasma and the acceleration efficiency of the process. The instability thus raises the temperature from about 10^8 K to 10^{10} to 10^{11} K; the fraction of suprathermal particles is 2 to 4%.
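In the ultrarelativistic limit Theta = k_B T / (m c^2) >> 1, the Maxwell-Jüttner distribution reduces to f(gamma) ~ gamma^2 exp(-gamma/Theta), whose mean is <gamma> = 3 Theta; a numeric check of this relation, which underlies temperature fits of the kind described above (an illustrative sketch, not the thesis's fitting code):

```python
import math

def mean_gamma(theta, n=100_000):
    """Mean Lorentz factor of f(gamma) ~ gamma^2 * exp(-gamma/theta)
    (ultrarelativistic Maxwell-Juettner), via the trapezoidal rule."""
    gmax = 40.0 * theta                    # the tail beyond this is negligible
    dg = gmax / n
    num = den = 0.0
    for k in range(n + 1):
        g = k * dg
        w = 0.5 if k in (0, n) else 1.0    # trapezoid endpoint weights
        f = g * g * math.exp(-g / theta)
        num += w * g * f
        den += w * f
    return num / den

# Analytically <gamma> = 3*theta, so a fitted spectrum's mean energy
# directly yields the temperature.
print(mean_gamma(5.0))   # ~ 15.0
```

Inverting this relation is one simple way to read off a temperature from a measured particle energy histogram.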
In this thesis we apply recently developed as well as well-established quantum Monte Carlo methods to numerically investigate models of strongly correlated electron systems on honeycomb structures. The latter are of particular interest owing to the unique properties of electrons on such lattices, like the relativistic dispersion, strong quantum fluctuations, and robustness against instabilities. This work covers several projects, including the advancement of the weak-coupling continuous-time quantum Monte Carlo method and its application to zero temperature and to phonons, quantum phase transitions of valence bond solids in spin-1/2 Heisenberg systems using projector quantum Monte Carlo in the valence bond basis, and the magnetic-field-induced transition to a canted antiferromagnet in the Hubbard model on the honeycomb lattice. The emphasis lies on two projects investigating the phase diagram of the SU(2)- and the SU(N)-symmetric Hubbard model on the hexagonal lattice. At sufficiently low temperatures, condensed-matter systems tend to develop order. Quantum spin liquids are an exception: there, fluctuations prevent a transition to an ordered state down to the lowest temperatures. Such a state has previously been elusive in experimentally relevant microscopic two-dimensional models; here we show, by means of large-scale quantum Monte Carlo simulations of the SU(2) Hubbard model on the honeycomb lattice, that a quantum spin liquid emerges between the state described by massless Dirac fermions and an antiferromagnetically ordered Mott insulator. This unexpected quantum-disordered state is found to be a short-range resonating valence bond liquid, akin to the one proposed for high-temperature superconductors. Inspired by the rich phase diagrams of SU(N) models, we study the SU(N)-symmetric Hubbard-Heisenberg quantum antiferromagnet on the honeycomb lattice to investigate the reliability of 1/N corrections to large-N results by means of numerically exact QMC simulations. 
We study the melting of phases as correlations increase with decreasing N and determine whether the quantum spin liquid found in the SU(2) Hubbard model at intermediate coupling is a specific feature, or also exists in the unconstrained t-J model and higher symmetries.
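The massless Dirac dispersion mentioned above is already present in the non-interacting honeycomb tight-binding model; a minimal illustration (my own sketch, with hopping t = 1 and bond length 1, not code from the thesis):

```python
import numpy as np

A1 = np.array([1.5, np.sqrt(3) / 2])   # honeycomb Bravais lattice vectors
A2 = np.array([1.5, -np.sqrt(3) / 2])

def honeycomb_bands(kx, ky, t=1.0):
    """Tight-binding bands E(k) = +/- t |f(k)|, f(k) = 1 + e^{ik.a1} + e^{ik.a2}."""
    k = np.array([kx, ky])
    f = 1.0 + np.exp(1j * k @ A1) + np.exp(1j * k @ A2)
    return t * abs(f), -t * abs(f)

# At the Dirac point K = (2*pi/3, 2*pi/(3*sqrt(3))) the two bands touch at E = 0,
# giving the relativistic (linear) dispersion around it.
K = (2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3)))
```

Interactions in the Hubbard model then decide whether this semimetallic point survives, gaps into the spin liquid, or orders antiferromagnetically.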
One key scientific program of the MAGIC telescope project is the discovery and detection of blazars. They constitute the most prominent extragalactic source class in the very high energy (VHE) Gamma-ray regime, with 29 out of 34 known objects (as of April 2010). Therefore, a major part of the available observation time in recent years was spent on high-frequency peaked blazars. The selection criteria were chosen to increase the detection probability. As the X-ray flux is believed to be correlated with the VHE Gamma-ray flux, only X-ray selected sources with a flux F(X) > 2 μJy at 1 keV were considered. To avoid strong attenuation of the Gamma-rays by the extragalactic infrared background, the redshift was restricted to z < 0.15 or z < 0.4, depending on the declination of the objects. The declination determines the zenith distance during culmination, which should not exceed 30° (for z < 0.4) or 45° (for z < 0.15), respectively. Between August 2005 and April 2009, a sample of 24 X-ray selected high-frequency peaked blazars was observed with the MAGIC telescope. Three of them were detected, including 1ES 1218+304, the first high-frequency peaked BL Lacertae object (HBL) to be discovered with MAGIC in VHE Gamma-rays. One previously detected object was not confirmed as a VHE emitter by MAGIC in this campaign. The set of 20 blazars not previously detected is treated in more detail in this work. In this campaign, spanning almost four years, ~ 450 hrs or ~ 22% of the available observation time for extragalactic objects were dedicated to investigating the baseline emission of blazars and their broadband spectral properties in this emission state. For the sample of 20 objects in the redshift range 0.018 < z < 0.361, integral flux upper limits in the VHE range at the 99.7% confidence level (corresponding to 3 standard deviations) were calculated, resulting in values between 2.9% and 14.7% of the integral flux of the Crab Nebula. 
As the distribution of significances of the individual objects shows a clear shift to positive values, a stacking method was applied to the sample. For the whole set of 20 objects, an excess of Gamma-rays was found with a significance of 4.5 standard deviations in 349.5 hours of effective exposure time. This is the first successful application of signal stacking in the VHE regime. The measured integral flux of the cumulative signal corresponds to 1.4% of the Crab Nebula flux above 150 GeV, with a spectral index α = −3.15 ± 0.57. None of the objects showed significant variability during the observation time, and therefore the detected signal can be interpreted as the baseline emission of these objects. For the individual objects, lower limits on the broad-band spectral indices αX−Gamma between the X-ray range at 1 keV and the VHE Gamma-ray regime at 200 GeV were calculated. The majority of objects show the spectral behaviour expected for the source class of HBLs: the energy output in the VHE regime is in general lower than in X-rays. For the stacked blazar sample, the broad-band spectral index was calculated to be αX−Gamma = 1.09, confirming the result found for the individual objects. Further evidence for the revelation of the baseline emission is the broad-band spectral energy distribution (SED), comprising archival as well as contemporaneous multi-wavelength data from the radio to the VHE band. The SEDs of known VHE Gamma-ray sources in low flux states match well the SED of the stacked blazar sample.
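A two-point broad-band index such as αX−Gamma can be computed directly from a pair of flux densities; a minimal sketch assuming the common convention F_ν ∝ ν^(−α) between 1 keV and 200 GeV (the thesis may use a slightly different normalisation, and all flux values here are placeholders):

```python
import math

def broadband_index(f_nu_x, f_nu_g, e_x_ev=1e3, e_g_ev=2e11):
    """Two-point spectral index alpha, with F_nu proportional to nu^(-alpha),
    evaluated between 1 keV (X-ray) and 200 GeV (VHE). Since nu = E/h, the
    frequency ratio equals the photon-energy ratio."""
    return -math.log10(f_nu_g / f_nu_x) / math.log10(e_g_ev / e_x_ev)
```

With this convention, equal flux densities at both energies give α = 0, and α > 1 corresponds to the HBL-like case where the energy output νF_ν in the VHE regime falls below that in X-rays.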
The standard model (SM) of particle physics has been, for the last three decades, a very successful description of the properties and interactions of all known elementary particles. Currently, it is again being probed with the first collisions at the Large Hadron Collider (LHC). It is widely expected that new physics will be detected at the LHC and that the SM will have to be extended. The most exhaustively analyzed extension of the SM is supersymmetry (SUSY). SUSY can not only solve intrinsic problems of the SM like the hierarchy problem, but it also postulates new particles which might explain the nature of dark matter in the universe. The majority of studies of dark matter in the framework of SUSY have focused on the minimal supersymmetric standard model (MSSM). The aim of this work is to consider scenarios beyond that scope. We consider two models which explain not only dark matter but also neutrino masses: the gravitino as dark matter in gauge mediated SUSY breaking (GMSB) with bilinearly broken $R$-parity, as well as different seesaw scenarios with the neutralino as dark matter candidate. Furthermore, we also study the next-to-minimal supersymmetric standard model (NMSSM), which solves the \(\mu\)-problem of the MSSM, and discuss the properties of the neutralino as dark matter candidate. In case of $R$-parity violation, light gravitinos are often the only remaining candidate for dark matter in SUSY because of their very long lifetime. We reconsider the cosmological gravitino problem arising for this kind of model. It will be shown that the proposed solution to the overclosure of the universe by light gravitinos, namely entropy production by decays of the GMSB messengers, works only in a small subset of models and in fine-tuned regions of the parameter space. This is a consequence of two effects overlooked so far: the enhanced decay channels into massive vector bosons and the impact of charged messenger particles. 
Both aspects cause an interplay between different cosmological restrictions which leads to strong constraints on the parameters of GMSB models. Afterwards, a minimal supergravity (mSugra) scenario with additional chiral superfields at high energy scales is considered. These fields are arranged in complete $SU(5)$ multiplets in order to maintain gauge unification. The new fields generate a dimension-5 operator that explains the neutrino data. Furthermore, they cause large differences in the mass spectrum of the MSSM fields because of the modified evolution of the renormalization group equations, which also changes the properties of the lightest neutralino as dark matter candidate. We discuss the parameter space of all three possible seesaw scenarios with respect to dark matter and the impact on rare lepton flavor violating processes. As we will see, especially in seesaw type~III, but also in type~II, the mass spectrum and the regions of parameter space consistent with dark matter differ significantly from those of a common mSugra scenario. Moreover, the experimental bounds, in particular on branching ratios like \(l_i \rightarrow l_j \gamma\), impose strong constraints on the seesaw parameters.
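The dimension-5 operator mentioned above yields the familiar seesaw estimate m_ν ≈ y²v²/M; a back-of-the-envelope sketch (the Yukawa coupling, heavy scale, and vev used here are illustrative assumptions, not values from the thesis):

```python
def seesaw_mass_ev(yukawa=1.0, m_heavy_gev=1e14, vev_gev=174.0):
    """Order-of-magnitude seesaw estimate m_nu ~ y^2 v^2 / M, returned in eV.

    yukawa      -- dimensionless neutrino Yukawa coupling (assumed)
    m_heavy_gev -- mass scale of the heavy seesaw mediator in GeV (assumed)
    vev_gev     -- electroweak Higgs vev in GeV
    """
    return yukawa**2 * vev_gev**2 / m_heavy_gev * 1e9  # GeV -> eV
```

For y = 1 and M = 10^14 GeV this lands at the sub-eV scale required by neutrino data, which is why the heavy states sit far above collider energies while still leaving indirect traces in the renormalization group running.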
We apply an antiferromagnetic symmetry breaking implementation of the dynamical cluster approximation (DCA) to investigate the two-dimensional hole-doped Kondo lattice model (KLM) with hopping $t$ and coupling $J$. The DCA is an approximation at the level of the self-energy. Short-range correlations on a small cluster, which is self-consistently embedded in the remaining bath electrons of the system, are handled exactly, whereas longer-ranged spatial correlations are incorporated on a mean-field level. The dynamics of the system, however, are retained in full. The strongly temporal nature of correlations in the KLM makes the model particularly suitable for investigation with the DCA. Our precise DCA calculations of single-particle spectral functions compare well with exact lattice QMC results at the particle-hole symmetric point. However, our DCA version, combined with a QMC cluster solver, also allows simulations away from particle-hole symmetry and has enabled us to map out the magnetic phase diagram of the model as a function of doping and coupling $J/t$. At half-filling, our results show that the linear behaviour of the quasi-particle gap at small values of $J/t$ is a direct consequence of particle-hole symmetry, which leads to nesting of the Fermi surface. Breaking the symmetry, by inclusion of a diagonal hopping term, results in a greatly reduced gap which appears to follow a Kondo scale. Upon doping, the magnetic phase observed at half-filling survives and ultimately gives way to a paramagnetic phase. Across this magnetic order-disorder transition, we track the topology of the Fermi surface. The phase diagram is composed of three distinct regions: paramagnetic with {\it large} Fermi surface, in which the magnetic moments are included in the Luttinger sum rule, weakly antiferromagnetic with large Fermi surface topology, and strongly antiferromagnetic with {\it small} Fermi surface, where the magnetic moments drop out of the Luttinger volume. 
We draw on a mean-field Hamiltonian with order parameters for both magnetisation and Kondo screening as a tool for interpreting our DCA results. Initial results for fixed coupling and doping but varying temperature are also presented, where the aim is to look for signatures of the energy scales in the system: the Kondo temperature $T_{K}$ for initial Kondo screening of the magnetic moments, the Néel temperature $T_{N}$ for antiferromagnetic ordering, a possible $T^{*}$ at which a reordering of the Fermi surface is observed, and finally, the formation of the coherent heavy fermion state at $T_{coh}$.
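The role of the Kondo-screening order parameter in reshaping the band structure can be illustrated with a minimal two-band hybridization Hamiltonian; this is a simplified sketch with hypothetical parameters, and it omits the magnetisation channel of the full mean-field Hamiltonian used in the thesis:

```python
import numpy as np

def hybridized_bands(eps_k, V, lam=0.0):
    """Eigenvalues of the 2x2 mean-field block h(k) = [[eps_k, V], [V, lam]],
    coupling a conduction electron (dispersion eps_k) to a localised f-level
    (at energy lam) via the Kondo screening amplitude V."""
    return np.linalg.eigvalsh(np.array([[eps_k, V], [V, lam]]))

# At eps_k = lam a nonzero V opens a direct hybridisation gap of 2V; the
# f-moments then enter the Luttinger count, producing the "large" Fermi surface.
lo, hi = hybridized_bands(0.0, 0.5)
```

In this picture, V -> 0 decouples the local moments from the conduction sea and the Fermi surface reverts to the "small" volume, mirroring the Fermi surface reorganisation tracked across the magnetic transition above.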