In recent years, many discoveries have revealed a close relation between quantum information and geometry in the context of the AdS/CFT correspondence. In this duality between a conformal quantum field theory (CFT) and a theory of gravity on Anti-de Sitter (AdS) spaces, quantum information quantities in the CFT are associated with geometric objects in AdS. The subject of this thesis is the examination of this intriguing property of AdS/CFT. We study two central elements of quantum information: subregion complexity -- a measure of the effort required to construct a given reduced state -- and the modular Hamiltonian -- given by the logarithm of the reduced state under consideration.
While a clear definition for subregion complexity in terms of unitary gates exists for discrete systems, a rigorous formulation for quantum field theories is not known.
In AdS/CFT, subregion complexity is proposed to be related to certain codimension one regions on the AdS side.
The main focus of this thesis lies on the examination of such candidates for gravitational duals of subregion complexity.
We introduce the concept of \textit{topological complexity}, which considers subregion complexity to be given by the integral over the Ricci scalar of codimension one regions in AdS. The Gauss-Bonnet theorem provides very general expressions for the topological complexity of CFT\(_2\) states dual to global AdS\(_3\), BTZ black holes and conical defects. In particular, our calculations show that the topology of the considered codimension one bulk region plays an essential role in topological complexity.
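For orientation, the reason topology enters can be seen from the two-dimensional Gauss-Bonnet theorem, stated here in its textbook form (this is the standard result, not a formula quoted from the abstract): for a region \(M\) with boundary \(\partial M\), Gaussian curvature \(K\), geodesic curvature \(k_g\) and Euler characteristic \(\chi(M)\), and using \(R = 2K\) in two dimensions,

```latex
\int_M K \,\mathrm{d}A \;+\; \oint_{\partial M} k_g \,\mathrm{d}s \;=\; 2\pi\,\chi(M),
\qquad R = 2K \;\;\Rightarrow\;\;
\int_M R \,\mathrm{d}A \;=\; 4\pi\,\chi(M) \;-\; 2\oint_{\partial M} k_g \,\mathrm{d}s .
```

The bulk integral of the Ricci scalar is thus fixed by the Euler characteristic up to a boundary term, which is why the topology of the codimension one region is decisive.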
Moreover, we study holographic subregion complexity (HSRC), which associates the volume of a particular codimension one bulk region with subregion complexity. We derive an explicit field theory expression for the HSRC of vacuum states. The formulation of HSRC in terms of field theory quantities may make it possible to investigate whether this bulk object indeed provides a concept of subregion complexity on the CFT side. In particular, if this turns out to be the case, our expression for HSRC may be seen as a field theory definition of subregion complexity. We extend our expression to states dual to BTZ black holes and conical defects.
A further focus of this thesis is the modular Hamiltonian of a family of states \(\rho_\lambda\) depending on a continuous parameter \(\lambda\). Here \(\lambda\) may be associated with the energy density or the temperature, for instance.
The importance of the modular Hamiltonian for quantum information is due to its contribution to relative entropy -- one of the very few objects in quantum information with a rigorous definition for quantum field theories.
The first order contribution in \(\tilde{\lambda}=\lambda-\lambda_0\) of the modular Hamiltonian to the relative entropy between \(\rho_\lambda\) and a reference state \(\rho_{\lambda_0}\) is provided by the first law of entanglement. We study under which circumstances higher order contributions in \(\tilde{\lambda}\) are to be expected.
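In the conventions commonly used for these quantities, the relative entropy decomposes into a modular-Hamiltonian piece and an entanglement-entropy piece, and the first law of entanglement states that the two agree at first order (standard definitions, not taken verbatim from the text):

```latex
S(\rho_\lambda \,\|\, \rho_{\lambda_0})
  = \Delta\langle H_{\mathrm{mod}}\rangle - \Delta S ,
\qquad H_{\mathrm{mod}} = -\log \rho_{\lambda_0},
```

```latex
\Delta\langle H_{\mathrm{mod}}\rangle = \Delta S + \mathcal{O}(\tilde{\lambda}^{2})
\;\;\Longrightarrow\;\;
S(\rho_\lambda \,\|\, \rho_{\lambda_0}) = \mathcal{O}(\tilde{\lambda}^{2}).
```

Higher order contributions of the modular Hamiltonian are therefore exactly the terms that make the relative entropy non-trivial beyond first order in \(\tilde{\lambda}\).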
We show that for states reduced to two entangling regions \(A\), \(B\) the modular Hamiltonian of at least one of these regions is expected to provide higher order contributions in \(\tilde{\lambda}\) to the relative entropy if \(A\) and \(B\) saturate the Araki-Lieb inequality. The Araki-Lieb inequality states that the difference between the entanglement entropies of \(A\) and \(B\) is always smaller than or equal to the entanglement entropy of the union of \(A\) and \(B\).
Regions for which this inequality is saturated are referred to as entanglement plateaux. In AdS/CFT the relation between geometry and quantum information provides many examples for entanglement plateaux. We apply our result to several of them, including large intervals for states dual to BTZ black holes and annuli for states dual to black brane geometries.
The modular Hamiltonian of reduced states, given essentially by the logarithm of the reduced density matrix, plays an important role within the AdS/CFT correspondence in view of its relation to quantum information. In particular, it is an essential ingredient for quantum information measures of distances between states, such as the relative entropy and the Fisher information metric. However, the modular Hamiltonian is known explicitly only for a few examples. For a family of states \(\rho_\lambda\) parametrized by a scalar \(\lambda\), the first order contribution in \(\tilde{\lambda}=\lambda-\lambda_0\) of the modular Hamiltonian to the relative entropy between \(\rho_\lambda\) and a reference state \(\rho_{\lambda_0}\) is completely determined by the entanglement entropy, via the first law of entanglement. For several examples, e.g. for ball-shaped regions in the ground state of CFTs, higher order contributions are known to vanish. In these cases the modular Hamiltonian contributes to the Fisher information metric in a trivial way. We investigate under which conditions the modular Hamiltonian provides a non-trivial contribution to the Fisher information metric, i.e. when the contribution of the modular Hamiltonian to the relative entropy is of higher order in \(\tilde{\lambda}\). We consider one-parameter families of reduced states on two entangling regions that form an entanglement plateau, i.e. the entanglement entropies of the two regions saturate the Araki-Lieb inequality. We show that in general, at least one of the relative entropies of the two entangling regions is expected to involve \(\tilde{\lambda}\) contributions of higher order from the modular Hamiltonian. Furthermore, we consider the implications of this observation for prominent AdS/CFT examples that form entanglement plateaux in the large \(N\) limit.
We consider the computation of volumes contained in a spatial slice of AdS\(_3\) in terms of observables in a dual CFT. Our main tool is kinematic space, defined either from the bulk perspective as the space of oriented bulk geodesics, or from the CFT perspective as the space of entangling intervals. We give an explicit formula for the volume of a general region in a spatial slice of AdS\(_3\) as an integral over kinematic space. For the region lying below a geodesic, we show how to write this volume purely in terms of entanglement entropies in the dual CFT. This expression is perhaps most interesting in light of the complexity = volume proposal, which posits that complexity of holographic quantum states is computed by bulk volumes. An extension of this idea proposes that the holographic subregion complexity of an interval, defined as the volume under its Ryu-Takayanagi surface, is a measure of the complexity of the corresponding reduced density matrix. If this is true, our results give an explicit relationship between entanglement and subregion complexity in CFT, at least in the vacuum. We further extend many of our results to conical defect and BTZ black hole geometries.
The idea that our observable Universe may have originated from a quantum tunneling event out of an eternally inflating false vacuum state is a cornerstone of the multiverse paradigm. Modern theories that are considered as an approach towards the ultraviolet-complete fundamental theory of particles and gravity, such as the various types of string theory, even suggest that a vast landscape of different vacuum configurations exists, and that gravitational tunneling is an important mechanism with which the Universe can explore this landscape. The tunneling scenario also presents a unique framework to address the initial conditions of our observable Universe. In particular, it makes it possible to introduce deviations from the cosmological concordance model in a controlled and well-motivated way. These deviations are a central topic of this work. An important feature in most of the theories mentioned above is the presumed existence of additional space dimensions in excess of the three which we observe in everyday experience. It was realized that these extra dimensions could evade detection if they are compactified to microscopic length scales far beyond the reach of current experiments. There also seem to be natural mechanisms available for dynamical compactification in those theories. These typically lead to a vast landscape of different vacuum configurations which may also differ in the number of macroscopic dimensions, only the total number of dimensions being determined by the theory. Transitions between these vacuum configurations may hence open up new directions which were previously compact, spontaneously compactify some previously macroscopic directions, or otherwise re-arrange the configuration of compact and macroscopic dimensions in a more general way. From within the bubble Universe, such a process may be perceived as an anisotropic background spacetime - intuitively, the dimensions which open up may give rise to preferred directions.
If our 3+1 dimensional observable Universe was born in a process as described above, one may expect to find traces of a preferred direction in cosmological observations. For instance, two directions could be curved like on a sphere, while the third space direction is flat. Using a scenario of gravitational tunneling to fix the initial conditions, I show how the primordial signatures in such an anisotropic Universe can be obtained in principle and work out a particular example in more detail. A small deviation from isotropy also has phenomenological consequences for the later evolution of the Universe. I discuss the most important effects and show that backreaction can be dynamically important. In particular, under certain conditions, a buildup of anisotropic stress in different components of the cosmic fluid can lead to a dynamical isotropization of the total stress-energy tensor. The mechanism is again demonstrated with the help of a physical example.
The main objectives of the KM3NeT Collaboration are (i) the discovery and subsequent observation of high-energy neutrino sources in the Universe and (ii) the determination of the mass hierarchy of neutrinos. These objectives are strongly motivated by two recent important discoveries, namely: (1) the high-energy astrophysical neutrino signal reported by IceCube and (2) the sizable contribution of electron neutrinos to the third neutrino mass eigenstate as reported by Daya Bay, RENO and others. To meet these objectives, the KM3NeT Collaboration plans to build a new Research Infrastructure consisting of a network of deep-sea neutrino telescopes in the Mediterranean Sea. A phased and distributed implementation is pursued which maximises the access to regional funds, the availability of human resources and the synergistic opportunities for the Earth and sea sciences community. Three suitable deep-sea sites are selected, namely off-shore Toulon (France), Capo Passero (Sicily, Italy) and Pylos (Peloponnese, Greece). The infrastructure will consist of three so-called building blocks. A building block comprises 115 strings, each string comprises 18 optical modules and each optical module comprises 31 photo-multiplier tubes. Each building block thus constitutes a three-dimensional array of photo sensors that can be used to detect the Cherenkov light produced by relativistic particles emerging from neutrino interactions. Two building blocks will be sparsely configured to fully explore the IceCube signal with similar instrumented volume, different methodology, improved resolution and
A highly significant excess of high-energy astrophysical neutrinos has been reported by the IceCube Collaboration. Some features of the energy and declination distributions of IceCube events hint at a North/South asymmetry of the neutrino flux. This could be due to the presence of the bulk of our Galaxy in the Southern hemisphere. The ANTARES neutrino telescope, located in the Mediterranean Sea, has been taking data since 2007. It offers the best sensitivity to muon neutrinos produced by galactic cosmic ray interactions in this region of the sky. In this letter a search for an extended neutrino flux from the Galactic Ridge region is presented. Different models of neutrino production by cosmic ray propagation are tested. No excess of events is observed and upper limits for different neutrino flux spectral indices Γ are set. For Γ=2.4 the 90% confidence level flux upper limit at 100 TeV for one neutrino flavour corresponds to Φ\(^{1f}_{0}\) (100 TeV) = 2.0 · 10\(^{−17}\) GeV\(^{−1}\) cm\(^{−2}\)s\(^{−1}\)sr\(^{−1}\). Under this assumption, at most two events of the IceCube cosmic candidates can originate from the Galactic Ridge. A simple power-law extrapolation of the Fermi-LAT flux to account for IceCube High Energy Starting Events is excluded at 90% confidence level.
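The quoted 90% C.L. upper limit pins an assumed unbroken power law at a pivot energy of 100 TeV. A minimal sketch of how such a limit extrapolates to other energies (the constants come from the abstract; the unbroken power-law form is the stated assumption, and this code is illustrative, not the collaboration's analysis tool):

```python
# Evaluate the ANTARES 90% C.L. single-flavour flux upper limit under the
# power-law assumption quoted in the text: Phi(E) = Phi0 * (E / E0)^(-Gamma).
PHI0 = 2.0e-17   # GeV^-1 cm^-2 s^-1 sr^-1 at the pivot E0 = 100 TeV (from the text)
E0_GEV = 1.0e5   # pivot energy, 100 TeV expressed in GeV
GAMMA = 2.4      # spectral index assumed in the text

def flux_upper_limit(e_gev, phi0=PHI0, e0=E0_GEV, gamma=GAMMA):
    """Differential flux upper limit dN/dE at energy e_gev [GeV]."""
    return phi0 * (e_gev / e0) ** (-gamma)

print(flux_upper_limit(1.0e5))  # at the pivot energy this returns PHI0
```

A steeper index (larger Γ) suppresses the extrapolated limit at high energies, which is why the quoted constraint on the IceCube candidates depends on the assumed spectrum.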
A search for high-energy neutrino emission correlated with gamma-ray bursts outside the electromagnetic prompt-emission time window is presented. Using a stacking approach of the time delays between reported gamma-ray burst alerts and spatially coincident muon-neutrino signatures, data from the ANTARES neutrino telescope recorded between 2007 and 2012 are analysed. One year of public data from the IceCube detector, taken between 2008 and 2009, has also been investigated. The respective timing profiles are scanned for statistically significant accumulations within 40 days of the gamma-ray burst, as expected from Lorentz invariance violation effects and some astrophysical models. No significant excess over the expected accidental coincidence rate could be found in either of the two data sets. The average strength of the neutrino signal is found to be fainter than one detectable neutrino signal per hundred gamma-ray bursts in the ANTARES data at 90% confidence level.
A search for secluded dark matter annihilation in the Sun using 2007-2012 data of the ANTARES neutrino telescope is presented. Three different cases are considered: a) detection of dimuons that result from the decay of the mediator; b) detection of neutrinos from a mediator that decays into a dimuon which, in turn, decays into neutrinos; and c) detection of neutrinos from a mediator that decays directly into neutrinos. As no significant excess over background is observed, constraints are derived on the dark matter mass and the lifetime of the mediator.
A search for muon neutrinos originating from dark matter annihilations in the Sun is performed using the data recorded by the ANTARES neutrino telescope from 2007 to 2012. In order to obtain the best possible sensitivities to dark matter signals, an optimisation of the event selection criteria is performed taking into account the background of atmospheric muons, atmospheric neutrinos and the energy spectra of the expected neutrino signals. No significant excess over the background is observed and 90% C.L. upper limits on the neutrino flux, the spin-dependent and spin-independent WIMP-nucleon cross-sections are derived for WIMP masses ranging from 50 GeV to 5 TeV for the annihilation channels WIMP + WIMP→ b\(\overline{b}\), W\(^{+}\)W\(^{−}\) and τ\(^{+}\)τ\(^{−}\).
We consider the process of muon-electron elastic scattering, which has been proposed as an ideal framework to measure the running of the electromagnetic coupling constant at space-like momenta and determine the leading-order hadronic contribution to the muon g-2 (MUonE experiment). We compute the next-to-leading order (NLO) contributions due to QED and purely weak corrections and implement them into a fully differential Monte Carlo event generator, which is available for first experimental studies. We show representative phenomenological results of interest for the MUonE experiment and examine in detail the impact of the various sources of radiative corrections under different selection criteria, in order to study the dependence of the NLO contributions on the applied cuts. The study represents the first step towards the realisation of a high-precision Monte Carlo code necessary for data analysis.
One of the main objectives of the ANTARES telescope is the search for point-like neutrino sources. Both the pointing accuracy and the angular resolution of the detector are important in this context, and a reliable way to evaluate this performance is needed. One possibility to measure the pointing accuracy of the detector is to study the shadow of the Moon, i.e. the deficit of the atmospheric muon flux from the direction of the Moon induced by the absorption of cosmic rays. Analysing the data taken between 2007 and 2016, the Moon shadow is observed with 3.5σ statistical significance. The detector angular resolution for downward-going muons is 0.73° ± 0.14°. The resulting pointing performance is consistent with expectations. An independent check of the telescope pointing accuracy is realised with the data collected by a shower array detector on board a ship temporarily moving around the ANTARES location.
Despite its precise agreement with experiment, the validity of the standard model (SM) of elementary particle physics is so far ensured only up to a scale of several hundred GeV. Moreover, the inclusion of gravity into a unifying theory poses a problem which cannot be solved by ordinary quantum field theory (QFT). String theory, which is the most popular ansatz for a unified theory, predicts QFT on noncommutative space-time as a low energy limit. Nevertheless, independently of the motivation given by string theory, the nonlocality inherent to noncommutative QFT opens up the possibility for the inclusion of gravity. There are no theoretical predictions for the energy scale Lambda_NC at which noncommutative effects arise, and it can be assumed to lie in the TeV range, which is the energy range probed by the next generation of colliders. Within this work we study the phenomenological consequences of a possible realization of QFT on noncommutative space-time relying on this assumption. The motivation for this thesis was given by the gap in the range of phenomenological studies of noncommutative effects in collider experiments, due to the absence in the literature of Large Hadron Collider (LHC) studies regarding noncommutative QFTs. In the first part we thus performed a phenomenological analysis of the hadronic process pp -> Z gamma -> l^+l^- gamma at the LHC and of electron-positron pair annihilation into a Z boson and a photon at the International Linear Collider (ILC). The noncommutative extension of the SM considered within this work relies on two building blocks: the Moyal-Weyl star-product of functions on ordinary space-time and the Seiberg-Witten maps. The latter relate the ordinary fields and parameters to their noncommutative counterparts such that ordinary gauge transformations induce noncommutative gauge transformations.
This requirement is expressed by a set of inhomogeneous differential equations (the gauge equivalence equations) which are solved by the Seiberg-Witten maps order by order in the noncommutative parameter Theta. Thus, by means of the Moyal-Weyl star-product and the Seiberg-Witten maps, a noncommutative extension of the SM as an effective theory, expanded in powers of Theta, can be achieved, providing the framework of our phenomenological studies. A consequence of the noncommutativity of space-time is the violation of rotational invariance with respect to the beam axis. This effect shows up in the azimuthal dependence of cross sections, which is absent in the SM as well as in other models beyond the SM. Thus, the azimuthal dependence of the cross section is a typical signature of noncommutativity and can be used to discriminate it from other new-physics effects. We have found this dependence to be best suited for deriving the sensitivity bounds on the noncommutative scale Lambda_NC. By studying pp -> Z gamma -> l^+l^- gamma to first order in the noncommutative parameter Theta, we show in the first part of this work that measurements at the LHC are sensitive to noncommutative effects only in certain cases, giving bounds on the noncommutative scale of Lambda_NC > 1.2 TeV. Our result improved the bounds present in the literature coming from past and present collider experiments by one order of magnitude. In order to explore the whole parameter range of the noncommutativity, ILC studies are required. By means of e^+e^- -> Z gamma -> l^+l^- gamma to first order in Theta we have shown that ILC measurements are complementary to LHC measurements of the noncommutative parameters. In addition, the bounds on Lambda_NC derived from the ILC are significantly higher and reach Lambda_NC > 6 TeV. The second part of this work arose from the necessity to enlarge the range of validity of our model towards higher energies.
Thus, we expand the neutral current sector of the noncommutative SM to second order in Theta. We found that, contrary to general expectation, the theory must be enlarged by additional parameters. The new parameters enter the theory as ambiguities of the Seiberg-Witten maps. The latter are not uniquely determined and differ by homogeneous solutions of the gauge equivalence equations. The expectation was that the ambiguities correspond to field redefinitions and therefore should vanish in scattering matrix elements. However, we proved that this is not the case, and the ambiguities do affect physical observables. Our conjecture is that every order in Theta will introduce new parameters to the theory. However, only experiment can decide to what extent efforts at still higher orders in Theta are reasonable, and it will also give directions for the development of theoretical models of noncommutative QFTs.
We analyze the concomitant spontaneous breaking of translation and conformal symmetries by introducing in a CFT a complex scalar operator that acquires a spatially dependent expectation value. The model, inspired by the holographic Q-lattice, provides a privileged setup to study the emergence of phonons from a spontaneous translational symmetry breaking in a conformal field theory and offers valuable hints for the treatment of phonons in QFT at large. We first analyze the Ward identity structure by means of standard QFT techniques, considering both spontaneous and explicit symmetry breaking. Next, by implementing holographic renormalization, we show that the same set of Ward identities holds in the holographic Q-lattice. Eventually, relying on the holographic and QFT results, we study the correlators realizing the symmetry breaking pattern and how they encode information about the low-energy spectrum.
The Bateman functions and the allied Havelock functions were introduced as solutions of some problems in hydrodynamics about ninety years ago, but after a period of one or two decades they were practically neglected. In handbooks, the Bateman function is only mentioned as a particular case of the confluent hypergeometric function. In order to revive our knowledge of these functions, their basic properties (recurrence, functional and differential relations, series, integrals and the Laplace transforms) are presented. Some new results are also included. Special attention is directed to the Bateman and Havelock functions with integer orders, to generalizations of these functions and to the Bateman-integral function known in the literature.
The adiabatic insertion of a \(\pi\) flux into a quantum spin Hall insulator gives rise to localized spin and charge fluxon states. We demonstrate that \(\pi\) fluxes can be used in exact quantum Monte Carlo simulations to identify a correlated \(Z_2\) topological insulator using the example of the Kane-Mele-Hubbard model. In the presence of repulsive interactions, a \(\pi\) flux gives rise to a Kramers doublet of spin-fluxon states with a Curie-law signature in the magnetic susceptibility. Electronic correlations also provide a bosonic mode of magnetic excitons with tunable energy that act as exchange particles and mediate a dynamical interaction of adjustable range and strength between spin fluxons. \(\pi\) fluxes can therefore be used to build models of interacting spins. This idea is applied to a three-spin ring and to one-dimensional spin chains. Because of the freedom to create almost arbitrary spin lattices, correlated topological insulators with \(\pi\) fluxes represent a novel kind of quantum simulator, potentially useful for numerical simulations and experiments.
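The Curie-law signature mentioned above is the standard free-moment behaviour of a decoupled Kramers doublet; in textbook form (this is the generic free-spin result, with the prefactor not taken from the abstract), the spin-fluxon contribution to the magnetic susceptibility is expected to scale as

```latex
\chi_{\text{spin-fluxon}}(T) \;\simeq\; \frac{(g\mu_B)^2\, S(S+1)}{3\, k_B T}
\;\xrightarrow{\;S=1/2\;}\; \frac{(g\mu_B)^2}{4\, k_B T},
```

so a \(1/T\) divergence of the local susceptibility signals the free local moment bound to the \(\pi\) flux.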
Pinning the Order: The Nature of Quantum Criticality in the Hubbard Model on Honeycomb Lattice
(2013)
In numerical simulations, spontaneously broken symmetry is often detected by computing two-point correlation functions of the appropriate local order parameter. This approach, however, computes the square of the local order parameter, and so when it is small, very large system sizes at high precisions are required to obtain reliable results. Alternatively, one can pin the order by introducing a local symmetry-breaking field and then measure the induced local order parameter infinitely far from the pinning center. The method is tested here at length for the Hubbard model on the honeycomb lattice, within the realm of the projective auxiliary-field quantum Monte Carlo algorithm. With our enhanced resolution, we find a direct and continuous quantum phase transition between the semimetallic and the insulating antiferromagnetic states as the interaction is increased. The single-particle gap, measured in units of Hubbard U, tracks the staggered magnetization. An excellent data collapse is obtained by finite-size scaling, with the values of the critical exponents in accord with the Gross-Neveu universality class of the transition.
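The data collapse mentioned above rests on the single-parameter scaling form \(m(U, L) = L^{-\beta/\nu} f\big((U - U_c)\,L^{1/\nu}\big)\). A minimal sketch of the rescaling step, with placeholder exponents and critical coupling (not the paper's fitted values) and synthetic data generated directly from the scaling form so that the collapse is exact:

```python
import numpy as np

# Finite-size-scaling collapse: rescale order-parameter curves m(U) taken at
# different linear sizes L so they fall onto a single scaling function f.
# Exponents and Uc below are illustrative placeholders, not fitted values.
BETA_OVER_NU = 0.8
ONE_OVER_NU = 1.2
UC = 3.8  # illustrative critical coupling

def collapse(u, m, L):
    """Map raw data (u, m) at size L onto scaling variables (x, y)."""
    x = (np.asarray(u) - UC) * L ** ONE_OVER_NU
    y = np.asarray(m) * L ** BETA_OVER_NU
    return x, y

# Synthetic curves built from the scaling form collapse exactly onto f:
f = lambda x: 1.0 / (1.0 + np.exp(x))  # arbitrary smooth scaling function
u = np.linspace(3.5, 4.1, 7)
curves = {L: L ** (-BETA_OVER_NU) * f((u - UC) * L ** ONE_OVER_NU)
          for L in (6, 9, 12)}
x6, y6 = collapse(u, curves[6], 6)
x12, y12 = collapse(u, curves[12], 12)
```

In practice the exponents are varied until the rescaled curves for all sizes coincide, and the quality of the collapse selects the universality class.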
The top quark plays an important role in current particle physics, from a theoretical point of view because of its uniquely large mass, but also experimentally because of the large number of top events recorded by the LHC experiments ATLAS and CMS, which makes it possible to directly measure the properties of this particle, for example its couplings to the other particles of the standard model (SM), with previously unknown precision. In this thesis, an effective field theory approach is employed to introduce a minimal and consistent parametrization of all anomalous top couplings to the SM gauge bosons and fermions which are compatible with the SM symmetries. In addition, several aspects and consequences of the underlying effective operator relations for these couplings are discussed. The resulting set of couplings has been implemented in the parton level Monte Carlo event generator WHIZARD in order to provide a tool for the quantitative assessment of the phenomenological implications at present and future colliders such as the LHC or a planned international linear collider. The phenomenological part of this thesis is focused on the charged current couplings of the top quark, namely anomalous contributions to the trilinear tbW coupling as well as quartic four-fermion contact interactions of the form tbff, both affecting single top production as well as top decays at the LHC. The study includes various aspects of inclusive cross section measurements as well as differential distributions of single tops produced in the t channel, bq → tq', and in the s channel, ud → tb. We discuss the parton level modelling of these processes as well as detector effects, and finally present the projected LHC reach for setting limits on these couplings with 10 and 100 fb\(^{−1}\) of data recorded at √s = 14 TeV.
The investigation of strongly correlated electron systems by means of the two-dimensional Hubbard model forms the central topic of this work. We analyse the fate of the Mott insulator both under doping and under a reduction of the interaction strength. The numerical evaluation is carried out using quantum cluster approximations, which guarantee a thermodynamically consistent description of the ground-state properties. The framework of self-energy-functional theory used here offers great flexibility in the construction of cluster approximations. A detailed analysis sheds light on the quality and convergence behaviour of different cluster approximations within self-energy-functional theory. For these investigations we use the one-dimensional Hubbard model and compare our results with the exact solution. In two dimensions we find the ground state of the particle-hole-symmetric model at half filling to be an antiferromagnetic insulator, independent of the interaction strength. The inclusion of short-range spatial correlations by our cluster approximation leads, in comparison with dynamical mean-field theory, to a significant improvement of the antiferromagnetic order parameter. Furthermore, in the paramagnetic phase we observe a metal-insulator transition as a function of the interaction strength that differs qualitatively from the pure mean-field scenario. Starting from the antiferromagnetic Mott insulator, a filling-driven metal-insulator transition into a paramagnetic metallic phase emerges. Depending on the cluster approximation used, an antiferromagnetic metallic phase appears first. In addition to long-range antiferromagnetic order, our calculations also take superconductivity into account.
The behaviour of the superconducting order parameter as a function of doping is in good agreement both with other numerical methods and with experimental results.
Explaining the baryon asymmetry of the Universe has been a long-standing problem of particle physics, with the consensus being that new physics is required as the Standard Model (SM) cannot resolve this issue. Beyond the Standard Model (BSM) scenarios would need to incorporate new sources of \(CP\) violation and either introduce new departures from thermal equilibrium or modify the existing electroweak phase transition. In this thesis, we explore two approaches to baryogenesis, i.e. the generation of this asymmetry.
In the first approach, we study the two-particle irreducible (2PI) formalism as a means to investigate non-equilibrium phenomena. After arriving at the renormalised equations of motion (EOMs) to describe the dynamics of a phase transition, we discuss the techniques required to obtain the various counterterms in an on-shell scheme. To this end, we consider three truncations up to two-loop order of the 2PI effective action: the Hartree approximation, the scalar sunset approximation and the fermionic sunset approximation. We then reconsider the renormalisation procedure in an \(\overline{\text{MS}}\) scheme to evaluate the 2PI effective potential for the aforementioned truncations. In the Hartree and the scalar sunset approximations, we obtain analytic expressions for the various counterterms and subsequently calculate the effective potential by piecing together the finite contributions. For the fermionic sunset approximation, we obtain similar equations for the counterterms in terms of divergent parts of loop integrals. However, these integrals cannot be expressed in an analytic form, making it impossible to evaluate the 2PI effective potential with the fermionic contribution. Our main results are thus related to the renormalisation programme in the 2PI formalism: (i) the procedure to obtain the renormalised EOMs, now including fermions, which serve as the starting point for the transport equations for electroweak baryogenesis, and (ii) the method to obtain the 2PI effective potential in a transparent manner.
In the second approach, we study baryogenesis via leptogenesis. Here, an asymmetry in the lepton sector is generated, which is then converted into the baryon asymmetry via the sphaleron process in the SM. We proceed to consider an extension of the SM along the lines of a scotogenic framework. The newly introduced particles are odd under a \(\mathbb{Z}_2\) symmetry, and masses for the SM neutrinos are generated radiatively. The \(\mathbb{Z}_2\) symmetry renders the lightest BSM particle stable, providing a suitable dark matter (DM) candidate. Furthermore, the newly introduced heavy Majorana fermionic singlets provide the necessary sources of \(CP\) violation through their Yukawa interactions, and their out-of-equilibrium decays produce a lepton asymmetry. The model is constrained by a wide range of observables, such as consistency with neutrino oscillation data, limits on the branching ratios of charged-lepton-flavour-violating decays, electroweak observables and the observed DM relic density. We study leptogenesis in this model in light of the results of a Markov chain Monte Carlo scan implemented subject to the aforementioned constraints. Requiring successful leptogenesis to account for the baryon asymmetry then severely constrains the available parameter space.
This thesis aims at a description of the equilibrium dynamics of quantum spin glass systems. To this end, a generic fermionic SU(2), spin-1/2 spin glass model with infinite-range interactions is defined in the first part. The model is treated in the framework of imaginary-time Grassmann field theory along with the replica formalism. A dynamical two-step decoupling procedure, which retains the full time dependence of the (replica-symmetric) saddle point, is presented. As a main result, a set of highly coupled self-consistency equations for the spin-spin correlations can be formulated. Beyond the so-called spin-static approximation, two complementary systematic approximation schemes are developed in order to render the occurring integration problem feasible. One of these methods restricts the quantum-spin dynamics to a manageable number of bosonic Matsubara frequencies. A sequence of improved approximants to some quantity can be obtained by gradually extending the set of employed discrete frequencies. Extrapolation of such a sequence yields an estimate of the full dynamical solution. The other method is based on a perturbative expansion of the self-consistency equations in terms of the dynamical correlations. In the second part these techniques are applied to the isotropic Heisenberg spin glass both on the Fock space (HSGF) and, exploiting the Popov-Fedotov trick, on the spin space (HSGS). The critical temperatures of the paramagnet-to-spin-glass phase transitions are determined accurately. Compared to the spin-static results, the dynamics causes slight increases of T_c, by about 3% and 2%, respectively. For the HSGS, the specific heat C(T) is investigated in the paramagnetic phase and, by way of a perturbative method, below but close to T_c. The exact C(T) curve is shown to exhibit a pronounced non-analyticity at T_c and, contrary to recent reports by other authors, there is no indication of a maximum above T_c.
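The extrapolation step described above can be illustrated with a toy sketch. Assuming, purely for illustration, that the approximants approach their limit like \(a_N \approx a_\infty + c/N\) in the number \(N\) of retained frequencies (a hypothetical convergence model, not necessarily the one realised in the thesis), the limit can be estimated by eliminating \(c\) from the last two members of the sequence:

```python
def extrapolate(ns, values):
    """Estimate the N -> infinity limit of a sequence of approximants.

    Assumes the leading finite-N correction decays like c/N, i.e.
    a_N = a_inf + c/N; eliminating c from the last two data points
    gives a_inf. The 1/N model is an illustrative assumption.
    """
    n1, a1 = ns[-2], values[-2]
    n2, a2 = ns[-1], values[-1]
    return (n2 * a2 - n1 * a1) / (n2 - n1)

# synthetic sequence with known limit a_inf = 2.0 and correction 3/N
ns = [1, 2, 4, 8]
vals = [2.0 + 3.0 / n for n in ns]
print(extrapolate(ns, vals))  # -> 2.0
```

For a faster-decaying correction one would use the matching power of \(1/N\) in the same two-point elimination.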
In the last part of this thesis the spin glass model is augmented with a nearest-neighbor hopping term on an infinite-dimensional cubic lattice. An extended self-consistency structure can be derived by combining the decoupling procedure with the dynamical CPA method. For the itinerant Ising spin glass numerous solutions within the spin-static approximation are presented both at finite and zero temperature. Systematic dynamical corrections to the spin-static phase diagram in the plane of temperature and hopping strength are calculated, and the location of the quantum critical point is determined.
We perform global fits to the parameters of the Constrained Minimal Supersymmetric Standard Model (CMSSM) and to a variant with non-universal Higgs masses (NUHM1). In addition to constraints from low-energy precision observables and the cosmological dark matter density, we take into account the LHC exclusions from searches in jets plus missing transverse energy signatures with about 5 fb\(^{−1}\) of integrated luminosity. We also include the most recent upper bound on the branching ratio B\(_s\) → μμ from LHCb. Furthermore, constraints from and implications for direct and indirect dark matter searches are discussed. The best fit of the CMSSM prefers a light Higgs boson just above the experimentally excluded mass. We find that the description of the low-energy observables, (g − 2)\(_μ\) in particular, and the non-observation of SUSY at the LHC become more and more incompatible within the CMSSM. A potential SM-like Higgs boson with mass around 126 GeV can barely be accommodated. Values for B(B\(_s\)→μμ) just around the Standard Model prediction are naturally expected in the best fit region. The most-preferred region is not yet affected by limits from direct WIMP searches, but the next generation of experiments will probe this region. Finally, we discuss implications from fine-tuning for the best fit regions.
Killing the cMSSM softly
(2016)
We investigate the constrained Minimal Supersymmetric Standard Model (cMSSM) in the light of constraining experimental and observational data from precision measurements, astrophysics, direct supersymmetry searches at the LHC and measurements of the properties of the Higgs boson, by means of a global fit using the program Fittino. As in previous studies, we find rather poor agreement of the best fit point with the global data. We also investigate the stability of the electroweak vacuum in the preferred region of parameter space around the best fit point. We find that the vacuum is metastable, with a lifetime significantly longer than the age of the Universe. For the first time in a global fit of supersymmetry, we employ a consistent methodology to evaluate the goodness-of-fit of the cMSSM in a frequentist approach by deriving p values from large sets of toy experiments. We analyse analytically and quantitatively the impact of the choice of the observable set on the p value, and in particular its dilution when confronting the model with a large number of barely constraining measurements. Finally, for the preferred sets of observables, we obtain p values for the cMSSM below 10 %, i.e. we exclude the cMSSM as a model at the 90 % confidence level.
Single-molecule localization microscopy (SMLM) techniques like dSTORM can reveal biological structures down to the nanometer scale. The achievable resolution is not only defined by the localization precision of individual fluorescent molecules, but also by their density, which becomes a limiting factor, e.g., in expansion microscopy. Artificial deep neural networks can learn to reconstruct dense super-resolved structures such as microtubules from a sparse, noisy set of data points. This approach requires a robust method to assess the quality of a predicted density image and to quantitatively compare it to a ground truth image. Such a quality measure needs to be differentiable to be applied as a loss function in deep learning. We developed a new trainable quality measure based on Fourier Ring Correlation (FRC) and used it to train deep neural networks to map a small number of sampling points to an underlying density. Smooth ground truth images of microtubules were generated from localization coordinates using an anisotropic Gaussian kernel density estimator. We show that the FRC criterion ideally complements the existing state-of-the-art multiscale structural similarity index, since both are interpretable and there is no trade-off between them during optimization. The TensorFlow implementation of our FRC metric can easily be integrated into existing deep learning workflows.
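To make the core ingredient of such a metric concrete, here is a minimal NumPy sketch of a (non-differentiable) Fourier Ring Correlation curve between two images; the work above uses a trainable TensorFlow variant, and the simple linear ring binning below is an illustrative assumption.

```python
import numpy as np

def fourier_ring_correlation(img1, img2, n_rings=16):
    """Standard FRC: correlate the Fourier coefficients of two 2D images
    within concentric rings of spatial frequency, normalised per ring."""
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    h, w = img1.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)      # radial frequency of each pixel
    edges = np.linspace(0.0, min(h, w) / 2, n_rings + 1)
    frc = np.empty(n_rings)
    for i in range(n_rings):
        ring = (r >= edges[i]) & (r < edges[i + 1])
        num = np.abs(np.sum(f1[ring] * np.conj(f2[ring])))
        den = np.sqrt(np.sum(np.abs(f1[ring]) ** 2) *
                      np.sum(np.abs(f2[ring]) ** 2))
        frc[i] = num / den if den > 0 else 0.0
    return frc
```

An image correlated with itself gives an FRC of 1 in every ring, while two independent noise images decorrelate towards 0 at high spatial frequencies; a differentiable version for training would replace the hard ring masks with soft (e.g. Gaussian) frequency weights.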
In this thesis, we investigate aspects of the physics of heavy-fermion systems and correlated topological insulators. We numerically solve the interacting Hamiltonians that model these physical systems using quantum Monte Carlo algorithms, accessing both ground-state and finite-temperature observables. Initially, we focus on the metamagnetic transition in the Kondo lattice model for heavy fermions. On the basis of dynamical mean-field theory and the dynamical cluster approximation, our calculations point towards a continuous transition, where the signatures of metamagnetism are linked to a Lifshitz transition of heavy-fermion bands. In the second part of the thesis, we study various aspects of magnetic pi fluxes in the Kane-Mele-Hubbard model of a correlated topological insulator. We describe a numerical measurement of the topological index, based on the localized mid-gap states that pi-flux insertions provide. Furthermore, we take advantage of the intrinsic spin degree of freedom of a pi flux to devise instances of interacting quantum spin systems. In the third part of the thesis, we introduce and characterize the Kane-Mele-Hubbard model on the pi-flux honeycomb lattice. We place particular emphasis on the correlation effects along the one-dimensional boundary of the lattice and compare results from a bosonization study with finite-size quantum Monte Carlo simulations.
Twenty years after the discovery of the Crab Nebula as a source of very high energy gamma-rays, the number of sources newly discovered above 100 GeV using ground-based Cherenkov telescopes has considerably grown, at the time of writing of this thesis to a total of 81. The sources are of different types, including galactic sources such as supernova remnants, pulsars, binary systems, or so-far unidentified accelerators, and extragalactic sources such as blazars and radio galaxies. The goal of this thesis work was to search for gamma-ray emission from a particular type of blazars previously undetected at very high gamma-ray energies, using the MAGIC telescope. Those blazars previously detected were all of the same type, the so-called high-peaked BL Lacertae objects. These sources emit purely non-thermal emission and exhibit a peak in their radio-to-X-ray spectral energy distribution at X-ray energies. The entire blazar population extends from these rare, low-luminosity BL Lacertae objects with peaks at X-ray energies to the much more numerous, high-luminosity infrared-peaked radio quasars. Indeed, the low-peaked sources dominate the source counts obtained from space-borne observations at gamma-ray energies up to 10 GeV. Their spectra observed at lower gamma-ray energies show power-law extensions to higher energies, although theoretical models suggest them to turn over at energies below 100 GeV. This opened a unique opportunity for MAGIC, the Cherenkov telescope with the currently lowest energy threshold. In the framework of this thesis, the search was focused on the prominent sources BL Lac, W Comae and S5 0716+714. Two of the sources were unambiguously discovered at very high energy gamma-rays with the MAGIC telescope, based on the analysis of a total of about 150 hours' worth of data collected between 2005 and 2008. The analysis of this very large data set required novel techniques for treating the effects of twilight conditions on the data quality.
This was successfully achieved and resulted in a vastly improved performance of the MAGIC telescope in monitoring campaigns. The detections of low-peaked and intermediate-peaked BL Lac objects are in line with theoretical expectations, but push the models based on electron shock acceleration and inverse-Compton cooling to their limits. The short variability time scales of the order of one day observed at very high energies show that the gamma-rays originate rather close to the putative supermassive black holes in the centers of blazars, corresponding to less than 1000 Schwarzschild radii when taking into account relativistic bulk motion.
We analyze a variety of integration schemes for momentum-space functional renormalization group (FRG) calculations with the goal of finding an optimized scheme. Using the square-lattice t-t' Hubbard model as a testbed, we define and benchmark their quality. Most notably, we define an error estimate for the solution of the ordinary differential equation that circumvents the issues introduced by the divergences at the end of the FRG flow. Using this measure to control for accuracy, we find that a threefold reduction in the number of required integration steps is achievable by the choice of integrator. We publish a set of recommended choices for the functional renormalization group, shown to decrease the computational cost of FRG calculations and representing a valuable basis for further investigations.
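The gain from choosing a better integrator can be illustrated with a toy comparison (this is not the FRG flow itself, nor the error measure defined in the work above): counting how many fixed-size steps first-order forward Euler and classical fourth-order Runge-Kutta need to integrate the test equation \(y' = -y\) to a given accuracy.

```python
import math

def euler_step(f, t, y, h):
    # first-order forward Euler step
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    # classical fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def steps_needed(step, f, y0, t_end, exact, tol):
    """Double the step count until the endpoint error drops below tol."""
    n = 1
    while True:
        h = t_end / n
        t, y = 0.0, y0
        for _ in range(n):
            y = step(f, t, y, h)
            t += h
        if abs(y - exact) < tol:
            return n
        n *= 2

f = lambda t, y: -y  # test equation y' = -y, y(0) = 1, exact y(1) = e^{-1}
n_euler = steps_needed(euler_step, f, 1.0, 1.0, math.exp(-1.0), 1e-6)
n_rk4 = steps_needed(rk4_step, f, 1.0, 1.0, math.exp(-1.0), 1e-6)
print(n_euler, n_rk4)  # the higher-order scheme needs vastly fewer steps
```

In an FRG setting the right-hand side evaluation dominates the cost, so step-count reductions of this kind translate almost directly into wall-time savings, with the added complication that the flow diverges at the end and the error must be estimated away from the divergence.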
In this thesis we consider the hybrid quantum Monte Carlo method for simulations of the Hubbard and Su-Schrieffer-Heeger models. First, we discuss the hybrid quantum Monte Carlo method for the Hubbard model on a square lattice. We point out potential ergodicity issues and provide a way to circumvent them by a complexification of the method. Furthermore, we compare the efficiency of the hybrid quantum Monte Carlo method with a well-established determinantal quantum Monte Carlo method for simulations of the half-filled Hubbard model on square lattices. One reason why the hybrid quantum Monte Carlo method loses this comparison is that we do not observe the desired sub-quadratic scaling of the numerical effort. Afterwards, we present a formulation of the hybrid quantum Monte Carlo method for the Su-Schrieffer-Heeger model in two dimensions. Electron-phonon models like this are in general very hard to simulate with other Monte Carlo methods in more than one dimension. It turns out that the hybrid quantum Monte Carlo method is much better suited for this model. We achieve favorable scaling properties and provide a proof of concept. Subsequently, we use the hybrid quantum Monte Carlo method to investigate the Su-Schrieffer-Heeger model in detail at half-filling in two dimensions. We present numerical data for staggered valence-bond order at small phonon frequencies and antiferromagnetic order at high frequencies. Due to an O(4) symmetry, the antiferromagnetic order is connected to a superconducting charge-density wave. Considering the Su-Schrieffer-Heeger model without tight-binding hopping reveals an additional unconstrained Z_2 gauge theory. In this case, we find indications for π-fluxes and a possible Z_2 Dirac deconfined phase, as well as for a columnar valence-bond-ordered state at low phonon energies.
In our investigations of the various phase transitions, we discuss different possibilities for the underlying mechanisms and reveal first insights into a rich phase diagram.
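The classical core of the hybrid (Hamiltonian) Monte Carlo method discussed above can be sketched for a one-dimensional toy target, a unit Gaussian; the step size, trajectory length and target below are illustrative choices, and the quantum Monte Carlo versions in the thesis work on high-dimensional auxiliary fields instead.

```python
import math
import random

def hmc(logp, grad_logp, x0, n_samples, step=0.2, n_leap=10, seed=1):
    """Hybrid Monte Carlo: leapfrog trajectories plus a Metropolis
    accept/reject step that corrects the integration error."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)              # resample momentum
        h_old = 0.5 * p * p - logp(x)        # H = kinetic - log target
        xn, pn = x, p
        pn += 0.5 * step * grad_logp(xn)     # leapfrog: initial half kick
        for i in range(n_leap):
            xn += step * pn                  # drift
            if i < n_leap - 1:
                pn += step * grad_logp(xn)   # full kick
        pn += 0.5 * step * grad_logp(xn)     # final half kick
        h_new = 0.5 * pn * pn - logp(xn)
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            x = xn                           # accept; otherwise keep old x
        samples.append(x)
    return samples

# sample a unit Gaussian: log p(x) = -x^2/2 (up to a constant), grad = -x
s = hmc(lambda x: -0.5 * x * x, lambda x: -x, 0.0, 4000)
mean = sum(s) / len(s)
var = sum((v - mean) ** 2 for v in s) / len(s)
```

Because the leapfrog integrator is volume-preserving and time-reversible, the Metropolis step makes the chain exact despite the finite step size; in the lattice setting the gradient evaluation involves the fermion determinant, which is where the scaling questions discussed above arise.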
Next-to-leading-order electroweak corrections to pp -> W\(^{+}\)W\(^{-}\) -> 4 leptons at the LHC
(2016)
We present results of the first calculation of next-to-leading-order electroweak corrections to W-boson pair production at the LHC that fully takes into account leptonic W-boson decays and off-shell effects. Employing realistic event selections, we discuss the corrections in situations that are typical for the study of W-boson pairs as a signal process or of Higgs-boson decays H → WW∗, to which W-boson pair production represents an irreducible background. In particular, we compare the full off-shell results, obtained treating the W-boson resonances in the complex-mass scheme, to previous results in the so-called double-pole approximation, which is based on an expansion of the loop amplitudes about the W resonance poles. At small and intermediate scales, i.e. in particular in angular and rapidity distributions, the two approaches show the expected agreement at the level of fractions of a percent, but larger differences appear in the TeV range. For transverse-momentum distributions, the differences can even exceed the 10% level in the TeV range where “background diagrams” with one instead of two resonant W bosons gain in importance because of recoil effects.
Next-to-leading-order electroweak corrections to the production of four charged leptons at the LHC
(2017)
We present a state-of-the-art calculation of the next-to-leading-order electroweak corrections to ZZ production, including the leptonic decays of the Z bosons into μ\(^+\)μ\(^−\)e\(^+\)e\(^−\) or μ\(^+\)μ\(^−\)μ\(^+\)μ\(^−\) final states. We use complete leading-order and next-to-leading-order matrix elements for four-lepton production, including contributions of virtual photons and all off-shell effects of Z bosons, where the finite Z-boson width is taken into account using the complex-mass scheme. The matrix elements are implemented into Monte Carlo programs allowing for the evaluation of arbitrary differential distributions. We present integrated and differential cross sections for the LHC at 13 TeV both for an inclusive setup where only lepton identification cuts are applied, and for a setup motivated by Higgs-boson analyses in the four-lepton decay channel. The electroweak corrections are divided into photonic and purely weak contributions. The former show the well-known pronounced tails near kinematical thresholds and resonances; the latter are generically at the level of ∼ −5% and reach several times −10% in the high-energy tails of distributions. Comparing the results for μ\(^+\)μ\(^−\)e\(^+\)e\(^−\) and μ\(^+\)μ\(^−\)μ\(^+\)μ\(^−\) final states, we find significant differences mainly in distributions that are sensitive to the μ\(^+\)μ\(^−\) pairing in the μ\(^+\)μ\(^−\)μ\(^+\)μ\(^−\) final state. Differences between μ\(^+\)μ\(^−\)e\(^+\)e\(^−\) and μ\(^+\)μ\(^−\)μ\(^+\)μ\(^−\) channels due to interferences of equal-flavour leptons in the final state can reach up to 10% in off-shell-sensitive regions. Contributions induced by incoming photons, i.e. photon-photon and quark-photon channels, are included, but turn out to be phenomenologically unimportant.
The production of a neutral and a charged vector boson with subsequent decays into three charged leptons and a neutrino is a very important process for precision tests of the Standard Model of elementary particles and in searches for anomalous triple-gauge-boson couplings. In this article, the first computation of next-to-leading-order electroweak corrections to the production of the four-lepton final states μ\(^{+}\)μ\(^{−}\)e\(^{+}\)ν\(_{e}\), μ\(^{+}\)μ\(^{−}\)e\(^{−}\)ν\(_{e}\), μ\(^{+}\)μ\(^{−}\)μ\(^{+}\)ν\(_{μ}\), and μ\(^{+}\)μ\(^{−}\)μ\(^{−}\)ν\(_{μ}\) at the Large Hadron Collider is presented. We use the complete matrix elements at leading and next-to-leading order, including all off-shell effects of intermediate massive vector bosons and virtual photons. The relative electroweak corrections to the fiducial cross sections from quark-induced partonic processes vary between −3% and −6%, depending significantly on the event selection. At the level of differential distributions, we observe large negative corrections of up to −30% in the high-energy tails of distributions originating from electroweak Sudakov logarithms. Photon-induced contributions at next-to-leading order raise the leading-order fiducial cross section by +2%. Interference effects in final states with equal-flavour leptons are at the permille level for the fiducial cross section, but can lead to sizeable effects in off-shell sensitive phase-space regions.
Complete NLO corrections to W\(^{+}\)W\(^{+}\) scattering and its irreducible background at the LHC
(2017)
The process pp → μ\(^{+}\)ν\(_{μ}\)e\(^{+}\)ν\(_{e}\)jj receives several contributions of different orders in the strong and electroweak coupling constants. With appropriate event selections, this process is dominated by vector-boson scattering (VBS) and has recently been measured at the LHC. It is thus of prime importance to estimate each contribution precisely. In this article we compute for the first time the full NLO QCD and electroweak corrections to VBS and its irreducible background processes with realistic experimental cuts. We do not rely on approximations but use complete amplitudes involving two different orders at tree level and three different orders at one-loop level. Since we take into account all interferences, at NLO the corrections to the VBS process and to the QCD-induced irreducible background process contribute at the same orders. Hence the two processes cannot be unambiguously distinguished, and all contributions to the μ\(^{+}\)ν\(_{μ}\)e\(^{+}\)ν\(_{e}\)jj final state should preferably be measured together.
Atomically thin semiconductors from the transition metal dichalcogenide family are materials in which the optical response is dominated by strongly bound excitonic complexes. Here, we present a theory of excitons in two-dimensional semiconductors using a tight-binding model of the electronic structure. In the first part, we review extensive literature on 2D van der Waals materials, with particular focus on their optical response from both experimental and theoretical points of view. In the second part, we discuss our ab initio calculations of the electronic structure of MoS\(_2\), representative of a wide class of materials, and review our minimal tight-binding model, which reproduces low-energy physics around the Fermi level and, at the same time, allows for the understanding of their electronic structure. Next, we describe how electron-hole pair excitations from the mean-field-level ground state are constructed. The electron–electron interactions mix the electron-hole pair excitations, resulting in excitonic wave functions and energies obtained by solving the Bethe–Salpeter equation. This is enabled by the efficient computation of the Coulomb matrix elements optimized for two-dimensional crystals. Next, we discuss non-local screening in various geometries usually used in experiments. We conclude with a discussion of the fine structure and excited excitonic spectra. In particular, we discuss the effect of band nesting on the exciton fine structure; Coulomb interactions; and the topology of the wave functions, screening and dielectric environment. Finally, we follow by adding another layer and discuss excitons in heterostructures built from two-dimensional semiconductors.
Chromium dioxide CrO\(_2\) belongs to a class of materials called ferromagnetic half-metals, whose peculiar aspect is that they act as a metal in one spin orientation and as a semiconductor or insulator in the opposite one. Despite numerous experimental and theoretical studies motivated by technologically important applications of this material in spintronics, its fundamental properties such as momentum-resolved electron dispersions and the Fermi surface have so far remained experimentally inaccessible because of metastability of its surface, which instantly reduces to amorphous Cr\(_2\)O\(_3\). In this work, we demonstrate that direct access to the native electronic structure of CrO\(_2\) can be achieved with soft-x-ray angle-resolved photoemission spectroscopy, whose large probing depth penetrates through the Cr\(_2\)O\(_3\) layer. For the first time, the electronic dispersions and Fermi surface of CrO\(_2\) are measured, which are fundamental prerequisites to solve the long debate on the nature of electronic correlations in this material. Since density functional theory augmented by a relatively weak local Coulomb repulsion gives an exhaustive description of our spectroscopic data, we rule out strong-coupling theories of CrO\(_2\). Crucial for the correct interpretation of our experimental data in terms of the valence-band dispersions is the understanding of a nontrivial spectral response of CrO\(_2\) caused by interference effects in the photoemission process originating from the nonsymmorphic space group of the rutile crystal structure of CrO\(_2\).
Classical novae are thermonuclear explosions occurring on the surface of white dwarfs. When a white dwarf co-exists in a binary system with a main-sequence or more evolved star, mass accretion from the companion star to the white dwarf can take place if the companion overflows its Roche lobe. The envelope of hydrogen-rich matter that builds up on top of the white dwarf eventually ignites under degenerate conditions, leading to a thermonuclear runaway and an explosion of the order of \(10^{46}\) erg, while leaving the white dwarf intact. Spectral analyses of the debris indicate an abundance of isotopes that are tracers of nuclear burning via the hot CNO cycle, which in turn reveals some sort of mixing between the envelope and the white dwarf underneath. The exact mechanism is still a matter of debate.
The convection and deflagration in novae develop in the low Mach number regime. We used the Seven League Hydro code (SLH), which employs numerical schemes designed to correctly simulate low Mach number flows, to perform two- and three-dimensional simulations of classical novae. Based on a spherically symmetric model created with the aid of a stellar evolution code, we developed our own nova model and tested it on a variety of numerical grids and boundary conditions for validation. We focused on the evolution of temperature, density and nuclear energy generation rate at the layers between white dwarf and envelope, where most of the energy is generated, to understand the structure of the transition region and its effect on the nuclear burning. We analyzed the resulting dredge-up efficiency stemming from the convective motions in the envelope. Our models yield results similar to the literature, but seem to depend very strongly on the numerical resolution. We followed the evolution of the nuclear species involved in the CNO cycle and concluded that the thermonuclear reactions primarily taking place are those of the cold rather than the hot CNO cycle. The reason could be that, under the conditions generally assumed for multi-dimensional simulations, the envelope is in fact not degenerate. We performed initial tests for 3D simulations and found that alternative boundary conditions are needed.
The nature of dark matter and the origin of the baryon asymmetry are two of the deepest mysteries of modern particle physics. In the absence of hints regarding a possible solution to these mysteries, many approaches have been developed to tackle them simultaneously, leading to very diverse and rich models. We give a short review in which we describe the general features of some of these models and provide an overview of the general problem. We also propose a diagrammatic notation to label the different models.
To this day, it is not known in which environment the heaviest elements are formed via neutron-capture processes. Two possible scenarios are discussed in the literature: supernova explosions and neutron star mergers. Both contribute to element production, but which scenario provides the dominant environment remains disputed. Several facts favour supernova explosions as production sites: when a massive star collapses and subsequently explodes, the temperature and density are high enough for neutrons to be captured by and attached to the pre-existing elements. Although simulations with spherically symmetric models produce only proton-rich ejecta, asymmetric explosions can presumably yield neutron-rich ejecta owing to rotation and magnetic fields. The neutron excess is high enough for rapid neutron capture to occur. In this work I therefore examined the remnants of such explosions to search for asymmetries and their possible effects on element formation and distribution. The two supernova remnants CTB 109 and RCW 103 were selected for this purpose. CTB 109 hosts an anomalous X-ray pulsar at its centre, i.e. a neutron star with a high magnetic field and rapid rotation, which could have been caused by asymmetries. RCW 103 presumably also harbours such a pulsar as its central source. Both remnants are still rather young and are in their Sedov-Taylor phase. The distance to Earth is roughly 3 kpc for both remnants, placing them in the solar neighbourhood. The elements up to the iron group have their best-known lines in the X-ray regime. For this work, archival data from the XMM-Newton satellite were therefore selected, and the spectra in defined regions of the two supernova remnants were analysed with the EPIC MOS cameras.
However, today's X-ray satellites lack the sensitivity to detect the heaviest elements. In the spectra of the two remnants, therefore, mainly the elements silicon and magnesium were found, in CTB 109 also neon. Elements with higher mass numbers unfortunately could not be significantly separated from the background. The peaks of the three elements are clearly visible, and sulphur can also be detected in the regions with high count rates. For both supernova remnants, the best fit was obtained with the model vpshock, which assumes a plasma shocked plane-parallel at constant temperature. To achieve this fit, the parameters for the elements Fe, S, Si, Mg, O and Ne were varied; the remaining elements were fixed at solar abundance. For CTB 109, the temperatures (kT) in the high-count-rate regions lie between 0.6 and 0.7 keV, in the same range previously found for CTB 109 with other telescopes. In the low-count-rate regions the temperatures are somewhat lower, at 0.3-0.4 keV. In the supernova remnant RCW 103, only one high-count-rate region was analysed, yielding a temperature of 0.57 keV, while in the low-count-rate region the temperature is kT = 0.36 ± 0.08 keV. Both values are consistent with those found in CTB 109. The individual element lines were additionally fitted with Gaussian profiles and the fluxes determined. These were plotted in intensity maps, which show the different distributions of the elements across the supernova remnants. While silicon appears clumped in a few regions, magnesium is spread over the remnants and in some regions shows higher values than silicon. This suggests that the two elements were ejected from the explosion in different ways.
The distribution found here is indeed asymmetric, but it is not possible to attribute this to an asymmetric supernova explosion. For that, more than two supernova remnants would have to be examined with this method and compared with a theory, not yet available, of the element distribution in remnants. A direct comparison of the two supernova remnants studied so far, CTB 109 and RCW 103, shows that they closely resemble each other in temperature and element distribution. This points to a uniform spreading of the elements within supernova remnants. Silicon is transported outwards by the explosion in finger-like structures, the Rayleigh-Taylor instabilities, forming clumps that interact with the shells lying further out. Magnesium and neon, in contrast, are mainly produced in the burning stages before the explosion and in the outer layers of the star, the onion-shell structure, so an extended distribution is expected. These distributions of the three elements have been confirmed in this work: while magnesium and neon show high fluxes across the entire remnant, silicon is found very locally in the lobe of CTB 109 and in the bright south of RCW 103. Future X-ray telescopes with higher spatial resolution could further probe the observed connections between the asymmetric element distribution in a supernova remnant and the mechanisms of element formation in the supernova.
Two-particle excitations, such as spin and charge excitations, play a key role in high-Tc cuprate superconductors (HTSC). Due to the antiferromagnetism of the parent compound, the magnetic excitations are supposed to be directly related to the mechanism of superconductivity. In particular, the so-called resonance mode is a promising candidate for the pairing glue, a bosonic excitation mediating the electronic pairing. In addition, its interactions with itinerant electrons may be responsible for some of the observed properties of HTSC. Hence, getting to the bottom of the resonance mode is crucial for a deeper understanding of the cuprate materials. To analyze the corresponding two-particle correlation functions, we develop in the present thesis a new, non-perturbative and parameter-free technique for T=0 which is based on the Variational Cluster Approach (VCA, an embedded cluster method for one-particle Green's functions). Guided by the spirit of the VCA, we extract an effective electron-hole vertex from an isolated cluster and use a fully renormalized bubble susceptibility chi0 including the VCA one-particle propagators. Within our new approach, the magnetic excitations of HTSC are shown to be reproduced for the Hubbard model within the relevant strong-coupling regime. Notably, the famous resonance mode, occurring in the underdoped regime within the superconductivity-induced gap of spin-flip electron-hole excitations, is obtained. Its intensity and hourglass dispersion are in good overall agreement with experiments. Furthermore, characteristic features such as the position in energy of the resonance mode and the difference of the imaginary part of the susceptibility between the superconducting and the normal state are in accord with inelastic neutron scattering (INS) experiments. For the first time, a strongly correlated, parameter-free calculation reveals these salient magnetic properties, supporting the S=1 magnetic exciton scenario for the resonance mode.
Besides the INS data on magnetic properties, further important new insights were gained recently via ARPES (angle-resolved photoemission spectroscopy) and Raman experiments, which disclosed a quite different doping dependence of the antinodal compared to the near-nodal gap. This thesis provides an approach to the Raman response, similar to the magnetic case, for inspecting this gap dichotomy. In agreement with experiments and with one-particle data obtained in the VCA, we recover the antinodal gap decreasing and the near-nodal gap increasing as a function of doping. Hence, our results show that the Hubbard model accounts for these salient gap features. In summary, we develop a two-particle cluster approach which is appropriate for the strongly correlated regime and contains no free parameter. Our results obtained with this new approach, combined with the phase diagram and the one-particle excitations obtained in the VCA, strongly support a Hubbard model description of the HTSC cuprate materials.
The astronomical exploration at energies between 30\,GeV and $\lesssim$\,350\,GeV was the main motivation for building the \MAGIC-telescope. With its 17\,m \diameter\ mirror it is the world's largest imaging air-Cherenkov telescope. It is located at the Roque de los Muchachos on the Canary island of San Miguel de La Palma at 28.8$^\circ$\,N, 17.8$^\circ$\,W, 2200\,m a.s.l. The telescope detects Cherenkov light produced by relativistic electrons and positrons in air showers initiated by cosmic gamma-rays. The imaging technique is used to powerfully reject the background of hadronically induced air showers from cosmic rays. Their inverse power-law energy distribution leads to an increase of the event rate with decreasing energy threshold. For \MAGIC this implies a trigger rate on the order of 250\,Hz, and a correspondingly large data stream to be recorded and analyzed. A robust analysis software package, including the general framework \MARS, was developed and commissioned to allow the automation necessary for data taken under variable observing conditions. Since many of the astronomical sources of high-energy radiation, in particular the enigmatic gamma-ray bursts, are of a transient nature, the telescope was designed to allow repositioning within several tens of seconds while keeping a tracking accuracy of $\lesssim\,$0.01$^\circ$. Employing a starguider, a tracking accuracy of $\lesssim\,$1.3\,minutes of arc was obtained. The main class of sources at very high gamma-ray energies, known from previous imaging air-Cherenkov telescopes, are Active Galactic Nuclei with relativistic jets, the so-called high-peaked blazars. Their spectrum is entirely dominated by non-thermal emission, spanning more than 15 orders of magnitude in energy, from radio to gamma-ray energies. Predictions based on radiation models invoking a synchrotron self-Compton or hadronic origin of the gamma-rays suggest that a fairly large number of them should be detectable by \MAGIC.
Promising candidates have been chosen from existing compilations, requiring a high (synchrotron) X-ray flux, assumed to be related to a high (possibly inverse-Compton) flux at GeV energies, and a low distance, in order to avoid strong attenuation due to pair-production in interactions with low-energy photons from the extragalactic background radiation along the line of sight. Based on this selection, the first \AGN\ emitting gamma-rays at 100\,GeV, 1ES\,1218+304 at a redshift of $z=0.182$, was discovered, one of the two farthest known \AGN\ emitting in the TeV energy region. In this context, the automated analysis chain was successfully demonstrated. The source was observed in January 2005 during six moonless nights for 8.2\,h. At the same time the collaborating \KVA-telescope, located near the \MAGIC\ site, observed in the optical band. The calculated light curve showed no day-to-day variability and is compatible with a constant flux of $F($\,$>$\,$100\,\mbox{GeV})=(8.7\pm1.4) \cdot 10^{-7}\,\mbox{m}^{-2}\,\mbox{s}^{-1}$ within the statistical errors. A differential spectrum between 87\,GeV and 630\,GeV was calculated and is compatible with a power law of $F_E(E) = (8.1\pm 2.1) \cdot 10^{-7}(E/\mbox{250\,GeV})^{-3.0\pm0.4}\,\mbox{TeV}^{-1}\,\mbox{m}^{-2}\,\mbox{s}^{-1}$ within the statistical errors. The spectrum emitted by the source was obtained by taking into account the attenuation due to pair-production with low-energy photons of the extragalactic background. A homogeneous one-zone synchrotron self-Compton model was fitted to the collected multi-wavelength data. Using the simultaneous optical data, a best-fit model could be obtained from which some physical properties of the emitting plasma could be inferred. The result was compared with the so-called {\em blazar sequence}.
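As a quick consistency check on the numbers quoted above, the best-fit differential power law can be integrated analytically above 100 GeV; the result comes out on the same order as the measured integral flux (the remaining difference is plausibly covered by the quoted statistical errors on normalization and index). A minimal sketch:

```python
def integral_flux(f0, e0, gamma, e_min):
    """Integral flux above e_min for a differential power law
    dF/dE = f0 * (E/e0)**(-gamma), valid for gamma > 1.
    Energies in TeV, f0 in TeV^-1 m^-2 s^-1."""
    return f0 * e0 / (gamma - 1.0) * (e_min / e0) ** (1.0 - gamma)

# Best-fit spectrum of 1ES 1218+304 quoted above, integrated above 100 GeV
f_above_100gev = integral_flux(f0=8.1e-7, e0=0.25, gamma=3.0, e_min=0.1)
# ~6e-7 m^-2 s^-1, the same order as the measured 8.7e-7 m^-2 s^-1
```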
Over the last two decades, accompanied by their prediction and ensuing realization, topologically non-trivial materials like topological insulators, Dirac semimetals, and Weyl semimetals have been a focus of mesoscopic condensed matter research. While hosting a plethora of intriguing physical phenomena all on their own, even more fascinating features emerge when superconducting order is included. Their intrinsically pronounced spin-orbit coupling leads to peculiar, time-reversal symmetry protected surface states, unconventional superconductivity, and even to the emergence of exotic bound states in appropriate setups.
This thesis explores various junctions built from, or incorporating, topological materials in contact with superconducting order, placing particular emphasis on the transport properties and the proximity effect.
We begin with the analysis of Josephson junctions in which planar samples of mercury telluride are sandwiched between conventional superconducting contacts. The surprising experimental observation of pronounced excess currents, which can be well described by the Blonder-Tinkham-Klapwijk theory, has long been an ambiguous issue in this field, since the necessary presumptions are seemingly not met. We propose a resolution to this predicament by demonstrating that the interface properties in hybrid nanostructures of distinctly different materials do in fact corroborate these assumptions and explain the outcome. An experimental realization is feasible by gating the contacts. We then proceed with NSN junctions based on time-reversal symmetry broken Weyl semimetals and including superconducting order. Due to the anisotropy of the electron band structure, both the transport properties and the proximity effect depend substantially on the orientation of the interfaces between the materials. Moreover, an imbalance can be induced in the electron population between Weyl nodes of opposite chirality, resulting in a non-vanishing spin polarization of the Cooper pairs leaking into the normal contacts. We show that such a system features a tunable dipole character with possible applications in spintronics. Finally, we consider partially superconducting surface states of three-dimensional topological insulators. Tuning such a system into the so-called bipolar setup results in the formation of equal-spin Cooper pairs inside the superconductor, while the setup simultaneously acts as a filter for non-local singlet pairing. The creation and manipulation of these spin-polarized Cooper pairs can be achieved by mere electronic switching processes and in the absence of any magnetic order, rendering such a nanostructure an interesting system for superconducting spintronics.
The inherent spin-orbit coupling of the surface state is crucial for this observation, as is the bipolar setup, which strongly promotes non-local Andreev processes.
The continual degradation of semiconductor lasers, especially of lead-chalcogenide lasers, requires regular monitoring of characteristic properties such as tuning behavior and linewidth in spectroscopic systems. With a view to the highest possible degree of automation, an online analysis method for this monitoring will be necessary in the long term. The commonly used method of setting the laser operating point via underlying mode maps has the serious drawback that such mode maps are generally not measured under dynamic modulation conditions. Precisely in the dynamic case, these maps are sensitive to changes caused by cycling and degradation of the laser. Etalon signals are not reliable enough with respect to the tuning characteristics and are therefore insufficient for the desired automation. Mode hops or weak feedback effects cannot readily be identified in the interferogram. An extended analysis of the perturbations of these interferograms in the time-frequency domain by means of an AOK (adaptive optimal kernel) transformation proved considerably more informative, especially for signals with few periods. The linewidth of lead-chalcogenide lasers was determined by optical homodyne mixing. For incoherent superposition, the spectral distribution of the mixed signal corresponds to the convolution of the original distribution with itself. The laser is not tuned in this measurement; the optical delay was realized with an integrated White cell. It was observed that, depending on the degree of injection-current noise, the linewidth profile changed from Lorentzian to Gaussian. Heterodyne measurements were carried out with an external CO2 laser as local oscillator. The linewidth of a CO2 laser, a few kHz, is negligible compared to that of a lead-chalcogenide laser, and the superposition is completely incoherent.
Spectral distributions with typical Lorentzian profiles from 10 MHz up to 100 MHz and beyond were measured. Strikingly, symmetric side peaks frequently appeared on the flanks of the Lorentzian profile. A numerical simulation of a laser-diode model based on rate equations, with parameter values typical of lead-chalcogenide lasers, showed that the nonlinear laser model can develop pronounced multiples of resonances at spacings as small as 25 MHz. Such resonances reappear in the E-field spectrum as typical relaxation oscillations in the sidebands and explain the side peaks observed within the measured spectral distribution. The strength of the sidebands is a measure of the correlation between phase and amplitude fluctuations. The model for the numerical calculation of the E-field was extended to include thermal behavior. A comprehensive characterization method for the automated setup of a modulated laser system must be dynamic and time-resolved. The evaluation of optical mixing frequencies is then no longer limited to the direct interpretation of individual spectra but extends to analysis in the time-frequency domain. For a direct and fast time-frequency transformation, a short-time Fourier transform (STFT) is a natural choice, and it can also be implemented relatively easily in modern signal-processor hardware. It proves to be very robust and sufficient for the analysis of heterodyne signals required here. Fixing the analysis window within an STFT defines the resolution in time and frequency.
Comparative analyses of mixed signals with a continuous wavelet transform showed that details in the time-frequency domain can indeed be worked out better, but the computational effort is disproportionately larger due to the variable scaling, and thus the highly redundant analysis and its representation. An analysis of the linewidth profile is then carried out via the evolution of the scaling of a signal. The effective linewidth determined from heterodyne signals under modulated tuning should rather be called the "dynamic" or "intrinsic" laser linewidth. A direct correlation of the laser frequency variation with the noise of the injection current is evident. The effective bandwidth of the current noise is limited by the system electronics on the one hand and by the modulation bandwidth of the laser on the other. Beyond the important parameters of tuning and linewidth, the dynamic time-frequency analysis of heterodyne signals also reveals further phenomena such as feedback, mode superposition, and transient behavior due to direct coupling between intensity and frequency modulation.
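The STFT-based time-frequency analysis favored above can be illustrated with a minimal, dependency-free sketch: a Hann-windowed DFT per frame that tracks the dominant frequency of a signal over time. The window and hop sizes, the sampling rate, and the two-tone test signal are arbitrary choices for illustration, not values from the measurements.

```python
import cmath
import math

def stft_peak_track(signal, fs, win=64, hop=32):
    """Short-time Fourier transform via windowed DFTs; returns the
    dominant frequency in each frame (Hann window, magnitude peak)."""
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (win - 1)) for n in range(win)]
    peaks = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = [signal[start + n] * hann[n] for n in range(win)]
        mags = []
        for k in range(win // 2):
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / win)
                    for n in range(win))
            mags.append(abs(s))
        k_peak = max(range(len(mags)), key=lambda k: mags[k])
        peaks.append(k_peak * fs / win)
    return peaks

# Two-tone test signal: frequency jumps from 1 kHz to 3 kHz (fs = 16 kHz)
fs = 16000
sig = [math.sin(2 * math.pi * (1000 if i < 512 else 3000) * i / fs)
       for i in range(1024)]
track = stft_peak_track(sig, fs)
```

Fixing `win` fixes both the time resolution (`win/fs`) and the frequency resolution (`fs/win`), which is exactly the trade-off noted above for the STFT compared to the variably scaled wavelet transform.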
We report magnetotransport studies on a gated strained HgTe device. This material is a three-dimensional topological insulator and exclusively shows surface-state transport. Remarkably, the Landau-level dispersion and the accuracy of the Hall quantization remain unchanged over a wide density range (3×10^11 cm^-2 < n < 2×10^12 cm^-2). These observations imply that even at large carrier densities, the transport is surface-state dominated, where bulk transport would have been expected to coexist already. Moreover, the density dependence of the Dirac-type quantum Hall effect allows us to identify the contributions from the individual surfaces. A k⋅p model can describe the experiments, but only when assuming a steep band bending across the regions where the topological surface states are contained. This steep potential originates from the specific screening properties of Dirac systems and causes the gate voltage to influence the position of the Dirac points rather than that of the Fermi level.
In a first part, the bilayer Heisenberg model and the 2D Kondo necklace model are studied. Both models exhibit a quantum phase transition between an ordered and a disordered phase. The question of how a single doped hole couples to the critical fluctuations is addressed. A self-consistent Born approximation predicts that the doped hole couples to the magnons such that the quasiparticle residue vanishes at the quantum critical point. In this work the delicate question of the fate of the quasiparticle residue across the quantum phase transition is also tackled by means of large-scale quantum Monte Carlo simulations. Furthermore, the dynamics of a single hole doped into the magnetic background is investigated. In the second part, an analysis of the spiral staircase Heisenberg ladder is presented. The ladder consists of two ferromagnetically coupled spin-1/2 chains, where the coupling within the second chain can be tuned by twisting the ladder. Within this model the crossover between an ungapped spin-1/2 system and a gapped spin-1 system can be studied. In this work the emphasis is on the opening of the spin gap with respect to the ferromagnetic rung coupling. It is shown that there are essential differences in the scaling behavior of the spin gap depending on the twist of the model. Moreover, by means of the string order parameter it is shown that the system remains in the Haldane phase within the whole parameter range, although the spin gap scales differently. The tools used for the analyses are mainly large-scale quantum Monte Carlo methods, but also exact diagonalization techniques as well as mean-field approaches.
In this PhD thesis, the fingerprints of geometry and topology on low-dimensional mesoscopic systems are investigated. In particular, holographic non-equilibrium transport properties of the quantum spin Hall phase, a two-dimensional time-reversal symmetric bulk insulating phase featuring one-dimensional gapless helical edge modes, are studied. In these metallic helical edge states, the spin and the direction of motion of the charge carriers are locked to each other, and counter-propagating states at the same energy are conjugated by time-reversal symmetry. This phenomenology entails a so-called topological protection against elastic single-particle backscattering by time-reversal symmetry. We investigate the limitations of this topological protection by studying the influence of inelastic processes as induced by the interplay of phonons and extrinsic spin-orbit interaction, and by taking into account multi-electron processes due to electron-electron interaction, respectively. Furthermore, we propose possible spintronics applications that rely on a spin-charge duality that is uniquely associated with the quantum spin Hall phase. This duality is present in the composite system of two helical edge states with opposite helicity, as realized on the two opposite edges of a quantum spin Hall sample with ribbon geometry. More conceptually speaking, the quantum spin Hall phase is the first experimentally realized example of a symmetry-protected topological state of matter, a non-interacting insulating band structure which preserves an anti-unitary symmetry and is topologically distinct from a trivial insulator in the same symmetry class with totally localized and hence independent atomic orbitals. In the first part of this thesis, the reader is provided with a fairly self-contained introduction to the theoretical concepts underlying the timely research field of topological states of matter.
In this context, the topological invariants characterizing these novel states are viewed as global analogues of the geometric phase associated with a cyclic adiabatic evolution. Whereas the detailed discussion of the topological invariants is necessary to gain deeper insight into the nature of the quantum spin Hall effect and related physical phenomena, the non-Abelian version of the local geometric phase is employed in a proposal for holonomic quantum computing with spin qubits in quantum dots.
We represent the Z2 topological invariant characterizing a one-dimensional topological superconductor using a Wess–Zumino–Witten dimensional extension. The invariant is formulated in terms of the single-particle Green’s function which allows us to classify interacting systems. Employing a recently proposed generalized Berry curvature method, the topological invariant is represented independent of the extra dimension requiring only the single-particle Green’s function at zero frequency of the interacting system. Furthermore, a modified twisted boundary conditions approach is used to rigorously define the topological invariant for disordered interacting systems.
Context. In active galaxies, matter is accreted onto supermassive black holes (SMBHs). This accretion process causes a region roughly the size of our solar system to outshine the entire host galaxy, forming an active galactic nucleus (AGN). In some of these active galaxies, highly relativistic particle jets form parallel to the rotation axis of the supermassive black hole. A fraction of these sources is observed under a small inclination angle between the pointing direction of the jet and the observing line of sight. These sources are called blazars. Due to the small inclination angle and the highly relativistic speeds of the particles in the jet, beaming effects occur in the radiation of these particles. Blazars can be subdivided into the high-luminosity flat spectrum radio quasars (FSRQs) and the low-luminosity BL Lacertae objects (BL Lacs). Like all AGN, blazars are broadband emitters and therefore observable from the longest wavelengths in the radio regime to the shortest wavelengths in the gamma-ray regime. In this thesis I analyze blazars at these two extremes, with respect to their parsec-scale properties in the radio regime and the time evolution of their gamma-ray flux.
Method. In the radio regime the technique of very long baseline interferometry (VLBI) can be used to spatially resolve the synchrotron radiation coming from these objects down to sub-parsec scales. This makes it possible to observe the time evolution of the structure of such sources, as is done in large monitoring programs such as MOJAVE (15 GHz) and the Boston University blazar monitoring program (43 GHz). In this thesis I use data on 28 sources from these monitoring programs, spanning 10 years of observation from 2003 to 2013 and comprising over 1800 observed epochs, to study the brightness temperature and diameter gradients of these jets. I conduct a search for systematic geometry transitions in the radio jets. The synchrotron cooling time in the radio core of the jets is used to determine the magnetic field strength in the radio core. Considering the jet geometry, these magnetic field strengths are scaled to the ergosphere of the SMBH in order to obtain the distance of the radio core from the SMBH.
In the gamma-ray regime these blazars cannot be spatially resolved, which makes it hard to put strong constraints on where the gamma-ray emitting region lies. Blazars have been shown to be variable at high energies on time scales down to minutes. The nature of this variability can be studied in order to constrain the particle acceleration mechanism and possibly the location and size of the gamma-ray emitting region. The variability of blazars in the energy range between 0.1 GeV and 1 GeV can, for example, be observed with the pair-conversion telescope on board the Fermi satellite. I use 10 years of Fermi-LAT (Large Area Telescope) data in order to study the variability of a large sample of blazars (300-800 sources, depending on the significance filters applied to the data points). I quantify this variability with the Ornstein-Uhlenbeck (OU) parameters and the power spectral density (PSD) slopes. The same procedure is applied to 20 light curves available for the radio sample.
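A discretized Ornstein-Uhlenbeck process of the kind used to quantify such variability can be sketched as follows (Euler-Maruyama scheme). The OU process is mean-reverting, and its PSD is flat at low frequencies and falls off as 1/f^2 above the relaxation frequency; the parameter values below are arbitrary illustrations, not fit results from the thesis.

```python
import math
import random

def simulate_ou(theta, mu, sigma, dt, n, seed=1):
    """Euler-Maruyama discretization of the Ornstein-Uhlenbeck process
    dX = theta * (mu - X) dt + sigma dW, a standard stochastic model
    for flux variability."""
    rng = random.Random(seed)
    x = mu
    path = [x]
    sdt = math.sqrt(dt)
    for _ in range(n):
        x += theta * (mu - x) * dt + sigma * sdt * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Illustrative parameters (not fit results): the stationary variance
# of the process is sigma**2 / (2 * theta) = 1.0 here.
path = simulate_ou(theta=0.5, mu=0.0, sigma=1.0, dt=0.1, n=20000)
mean_est = sum(path) / len(path)
var_est = sum((v - mean_est) ** 2 for v in path) / len(path)
```

Fitting `theta`, `mu`, and `sigma` to an observed light curve, together with the measured PSD slope, gives the kind of variability parameters compared between FSRQs and BL Lacs above.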
Results. The diameter evolution along the jet axis of the radio sources suggests that FSRQs feature flatter gradients than BL Lacs. Fitting these gradients reveals that BL Lacs are systematically better described by a simple single power law than FSRQs. I found 9 sources with a strongly constrained geometry transition: 0219+421, 0336-019, 0415+379, 0528+134, 0836+710, 1101+384, 1156+295, 1253-055, and 2200+420. In all of these sources, the geometry transition regions lie further out in the jet than the Bondi sphere. The magnetic field strengths of BL Lacs are systematically larger than those of FSRQs. However, the scaling of these fields suggests that the radio cores of BL Lac objects are closer to the SMBHs than the radio cores of FSRQs. Analyzing the variability of Fermi-LAT light curves yields consistent results for all samples. FSRQs show systematically steeper PSD slopes and feature OU parameters more favorable to strong variability than BL Lacs. The Fermi-LAT light curves of the sub-sample of radio jets suggest an anticorrelation between the jet complexity from the radio observations and the OU parameters as well as the PSD slopes from the gamma-ray observations.
Conclusion.
The flatter diameter gradients of FSRQs suggest that these sources are more collimated further down the jet than BL Lacs. The systematically better description of the diameter and brightness temperature gradients of BL Lacs by a single power law suggests that FSRQs are more complex than BL Lac objects with respect to the diameter evolution along the jet and the surface brightness distribution. FSRQs often feature regions where recollimation can occur in distinct knots within the jets. For the sources where a geometry transition could be constrained, the Bondi radius, being systematically smaller than the position of the transition region along the jet axis, suggests that changing pressure gradients are not the sole cause of these systematic geometry transitions. Nevertheless, they may be responsible for recollimation regions, typically found downstream in the jet, beyond the Bondi radius and the transition zone. The difference in the distance of the radio cores between FSRQs and BL Lacs is most likely due to the combination of differences in SMBH masses and systematically smaller jet powers in BL Lacs. The variability in the energy ranges above 100 MeV and above 1 GeV suggests that many light curves of BL Lac objects are more likely to be white noise, while the PSD slopes and the OU parameters of FSRQ gamma-ray light curves favor stronger variability on larger time scales with respect to the time binning of the analyzed light curve. Although the anticorrelation of the jet complexity acquired from the radio observations with the PSD slopes and OU parameters from the gamma-ray observations suggests that more complex sources favor OU parameters and PSD slopes resulting in more variability (not white noise), it is beyond the scope of this thesis to pinpoint whether this correlation results from causation. The question whether a complex jet causes more gamma-ray variability, or more gamma-ray variability causes more complex jets, cannot be answered at this point.
Nevertheless, the computed correlation measures suggest that this dependence is most likely not linear, which indicates that these effects might even interact.
In the course of this work a three-dimensional, fully relativistic, parallelized particle-in-cell code was written, extensively tested, and applied. The code ACRONYM is flexible in its applications and, in terms of accuracy and stability, state of the art, and thus competitive with the codes of other groups used in astrophysics. Energy is conserved up to an error of < 0.03%, the divergence of the magnetic field always stays below a value of 10^{-12}, and the scaling has by now been tested up to cluster sizes of several 10000 CPUs. After the development of the code, the influence of the fundamental mass ratio m_p/m_e on particle acceleration by plasma instabilities was investigated in this work. This is relevant and important because PiC simulations in the vast majority of cases do not use the real mass ratio, since far too much computing power would otherwise be required to see what happens to the protons and what their influence is on light particles such as electrons and positrons. For this purpose, simulations with mass ratios between m_p/m_e = 1.0 and 200.0 were carried out. They all have in common that periodic boundary conditions were used and that the available simulation domain was completely filled with two counter-streaming plasma populations in order to exclude any kind of shocks. The raw data of the individual simulations were analyzed in a variety of ways: for example, slices through the particle distribution were produced, and one- and two-dimensional histograms and energy evolutions were examined.
The following key points emerged: for mass ratios up to about m_p/m_e = 20 the entire two-stream instability develops in a single phase, i.e., flux tubes surrounded by ring-shaped magnetic fields form and then merge until only two remain, and all particles are accelerated over the entire course of the instability. One concludes that the particle species of different mass, protons and electrons/positrons, are still so strongly coupled by the relatively close masses that only one instability can develop. For large mass ratios (m_p/m_e > 20) a clear separation into two phases of the instability is seen. First, flux tubes form again and merge with one another (in pairs or more) before the first part of the instability subsides. Subsequently, ring-shaped magnetic fields and flux tubes arise again, of which one is usually much stronger than all the others, meaning that it is surrounded by stronger magnetic fields and has a higher particle density. In this two-part instability the electrons and positrons are significantly accelerated only in the first phase, while the much heavier protons gain energy over the entire period. The highest-energy particles reach values around gamma = 250 in the rest frame of the respective plasma population. For future investigations with particle-in-cell codes one can conclude that inferences about the actual behavior at the real mass ratio of m_p/m_e = 1836.2 can only be drawn from simulations with m_p/m_e >> 20, since the strong coupling of the light and heavy particles at smaller mass ratios influences the results very strongly.
Based on the measured times of the instability maxima, an extrapolation was carried out, which shows that at the real mass ratio the instability would occur at about t = 1400 omega_{pe}^{-1}. To actually simulate this, however, more than 1000 times the number of CPU hours would have to be spent. Furthermore, a Maxwell-Jüttner distribution was fitted to the particle distributions of the individual simulations at the peak of the instability in order to compute both the new temperature of the plasma and the acceleration efficiency of the process. The instability thus raises the temperature from about 10^8 K to 10^{10}-10^{11} K, and the fraction of suprathermal particles is 2 to 4%.
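The last step, extracting a suprathermal fraction from a fitted Maxwell-Jüttner distribution, can be sketched numerically. In the Lorentz factor the distribution is f(γ) ∝ γ√(γ²−1) exp(−γ/Θ) with Θ = kT/(mc²); the cut value and the temperatures below are illustrative choices, not the fitted values from the simulations.

```python
import math

def mj_unnorm(gamma, theta):
    """Unnormalized Maxwell-Juettner distribution in the Lorentz factor:
    f(gamma) ~ gamma * sqrt(gamma**2 - 1) * exp(-gamma / theta),
    with theta = kT / (m * c**2)."""
    return gamma * math.sqrt(gamma * gamma - 1.0) * math.exp(-gamma / theta)

def suprathermal_fraction(theta, gamma_cut, gamma_max=200.0, steps=50000):
    """Fraction of particles with gamma > gamma_cut, by trapezoidal
    integration of the unnormalized distribution (illustrative only;
    the thesis fits the distribution to simulation data)."""
    h = (gamma_max - 1.0) / steps
    total = tail = 0.0
    for i in range(steps):
        g0 = 1.0 + i * h
        g1 = g0 + h
        seg = 0.5 * h * (mj_unnorm(g0, theta) + mj_unnorm(g1, theta))
        total += seg
        if g0 >= gamma_cut:
            tail += seg
    return tail / total

# A hotter plasma has a larger suprathermal tail (theta values illustrative)
frac_low = suprathermal_fraction(theta=1.0, gamma_cut=5.0)
frac_high = suprathermal_fraction(theta=2.0, gamma_cut=5.0)
```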
It is natural to consider the possibility that the most energetic particles detected (> 10^18 eV), ultra-high-energy cosmic rays (UHECRs), originate in the most luminous transient events observed (> 10^52 erg s^-1), gamma-ray bursts (GRBs). As a result of the interaction of highly accelerated, magnetically confined protons and ions with the photon field inside the burst, both neutrons and UHE neutrinos are expected to be created: the former escape the source and beta-decay into protons which propagate to Earth, where they are detected as UHECRs, while the latter, if detected, would constitute the smoking gun of hadronic acceleration in the sources.
Recently, km-scale neutrino telescopes such as IceCube have finally reached the sensitivities required to probe the neutrino predictions of some of the existing GRB models. On that account, we present here a revised, self-consistent model of joint UHE proton and neutrino production at GRBs that includes a state-of-the-art, improved numerical calculation of the neutrino flux (NeuCosmA); that uses a generalised UHECR emission model where some of the protons in the sources are able to "leak out" of their magnetic confinement before having interacted; and that takes into account the energy losses of the protons during their propagation to Earth. We use our predictions to take a close look at the cosmic ray-neutrino connection and find that the current UHECR observations by giant air shower detectors, together with the upper bounds on the flux of neutrinos from GRBs, are already sufficient to put tension on several possibilities of particle emission and propagation, and to point us towards some requirements that should be fulfilled by GRBs if they are to be the sources of the UHECRs. We further refine our analysis by studying a dynamical burst model, where we find that the different particle species originate at distinct stages of the expanding GRB, each under particular conditions. Finally, we consider a possibility of new physics: the effect of neutrino decay in the flux of UHE neutrinos from GRBs. On the whole, our results demonstrate that self-consistent models of particle production are now integral to the advancement of the field, given that the full picture of the UHE Universe will only emerge as a result of looking at the multi-messenger sky, i.e., at gamma-rays, cosmic rays, and neutrinos simultaneously.
The quantum Hall (QH) effect, which can be induced in a two-dimensional (2D) electron gas by an external magnetic field, paved the way for topological concepts in condensed matter physics. While the QH effect therefore cannot exist without Landau levels, there is a plethora of topological phases of matter that can exist even in the absence of a magnetic field. For instance, the quantum spin Hall (QSH), the quantum anomalous Hall (QAH), and the three-dimensional (3D) topological insulator (TI) phase are insulating phases of matter that owe their nontrivial topology to an inverted band structure. The latter results from a strong spin-orbit interaction or, generally, from strong relativistic corrections. The main objective of this thesis is to explore the fate of these preexisting topological states of matter when they are subjected to an external magnetic field, and to analyze their connection to quantum anomalies. In particular, the realization of the parity anomaly in solid state systems is discussed. Furthermore, band structure engineering, i.e., changing the quantum well thickness, the strain, and the material composition, is employed to manipulate and investigate various topological properties of the prototype TI HgTe.
Like the QH phase, the QAH phase exhibits unidirectionally propagating metallic edge channels. In contrast to the QH phase, however, it can exist without Landau levels. As such, the QAH phase is a condensed matter analog of the parity anomaly. We demonstrate that this connection facilitates a distinction between QH and QAH states in the presence of a magnetic field. We thereby debunk the widespread belief that these two topological phases of matter cannot be distinguished, since both are described by a $\mathbb{Z}$ topological invariant. More precisely, we demonstrate that the QAH topology remains encoded in a peculiar topological quantity, the spectral asymmetry, which quantifies the difference in the number of states between the conduction and the valence band. Deriving the effective action of QAH insulators in magnetic fields, we show that the spectral asymmetry is linked to a unique Chern-Simons term which contains the information about the QAH edge states. As a consequence, we reveal that counterpropagating QH and QAH edge states can emerge when a QAH insulator is subjected to an external magnetic field. These helical-like states exhibit exotic properties which make it possible to disentangle QH and QAH phases. Our findings are of particular importance for paramagnetic TIs, in which an external magnetic field is required to induce the QAH phase.
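The role of the spectral asymmetry can be illustrated with a textbook toy model rather than the full HgTe band structure: a single 2D Dirac fermion in a magnetic field, whose relativistic Landau levels come in +/-E pairs except for one unpaired zero mode. A minimal sketch in natural units, using one common sign convention:

```python
import math

def dirac_landau_levels(mass, b_field, n_max=50):
    """Relativistic Landau levels of a single 2D Dirac fermion:
    E_n = +-sqrt(2 n |B| + m^2) for n >= 1, plus the unpaired zero mode
    E_0 = -sign(B) * m (natural units, one common sign convention)."""
    levels = [-math.copysign(1.0, b_field) * mass]
    for n in range(1, n_max + 1):
        e_n = math.sqrt(2.0 * n * abs(b_field) + mass**2)
        levels.extend([e_n, -e_n])
    return levels

def spectral_asymmetry(levels):
    """eta = -1/2 * sum_n sign(E_n): the +-E pairs cancel, so only the
    unpaired zero mode contributes. (Here the sum is finite by
    truncation; the field-theory version requires regularisation.)"""
    return -0.5 * sum(math.copysign(1.0, e) for e in levels)

eta = spectral_asymmetry(dirac_landau_levels(mass=0.1, b_field=1.0))
print(f"eta = {eta}")  # -> eta = 0.5; flips sign with the field direction
```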
A byproduct of the band inversion is the formation of additional extrema in the valence band dispersion at large momenta (the `camelback'). We develop a numerical implementation of the $8 \times 8$ Kane model to investigate signatures of the camelback in (Hg,Mn)Te quantum wells. Varying the quantum well thickness, as well as the Mn-concentration, we show that the class of topologically nontrivial quantum wells can be subdivided into direct gap and indirect gap TIs. In direct gap TIs, we show that, in the bulk $p$-regime, pinning of the chemical potential to the camelback can cause an onset to QH plateaus at exceptionally low magnetic fields (tens of mT). In contrast, in indirect gap TIs, the camelback prevents the observation of QH plateaus in the bulk $p$-regime up to large magnetic fields (a few tesla). These findings allowed us to attribute recent experimental observations in (Hg,Mn)Te quantum wells to the camelback. Although our discussion focuses on (Hg,Mn)Te, our model should likewise apply to other topological materials which exhibit a camelback feature in their valence band dispersion.
Furthermore, we employ the numerical implementation of the $8\times 8$ Kane model to explore the crossover from a 2D QSH to a 3D TI phase in strained HgTe quantum wells. The latter exhibit 2D topological surface states at their interfaces which, as we demonstrate, are very sensitive to the local symmetry of the crystal lattice and electrostatic gating. We determine the classical cyclotron frequency of surface electrons and compare our findings with experiments on strained HgTe.
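For reference, the classical cyclotron frequency mentioned above is omega_c = eB/m*. The sketch below evaluates it for an illustrative effective mass; the ratio 0.03 is a hypothetical placeholder, since the actual value for strained HgTe surface states has to come from the k.p calculation or from experiment.

```python
# Classical cyclotron frequency omega_c = e B / m* of surface electrons.
# The effective-mass ratio below is a hypothetical placeholder.

E_CHARGE = 1.602176634e-19     # elementary charge [C]
M_ELECTRON = 9.1093837015e-31  # free electron mass [kg]

def cyclotron_frequency(b_tesla, mass_ratio):
    """omega_c in rad/s for an effective mass m* = mass_ratio * m_e."""
    return E_CHARGE * b_tesla / (mass_ratio * M_ELECTRON)

omega_c = cyclotron_frequency(1.0, 0.03)
print(f"omega_c ~ {omega_c:.2e} rad/s at B = 1 T")  # -> ~5.86e+12 rad/s
```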
Vevacious: a tool for finding the global minima of one-loop effective potentials with many scalars
(2013)
Several extensions of the Standard Model of particle physics contain additional scalars, implying a more complex scalar potential compared to that of the Standard Model. In general these potentials allow for charge- and/or color-breaking minima besides the desired one with correctly broken $SU(2)_L \times U(1)_Y$. Even if one assumes that a metastable local minimum is realized, one has to ensure that its lifetime exceeds that of our universe. We introduce a new program called Vevacious which takes a generic expression for a one-loop effective potential energy function and finds all the tree-level extrema, which are then used as the starting points for gradient-based minimization of the one-loop effective potential. The tunneling time from a given input vacuum to the deepest minimum, if different from the input vacuum, can be calculated. The parameter points are given as files in the SLHA format (though Vevacious is not restricted to supersymmetric models), and new model files can easily be generated automatically by the Mathematica package SARAH. This code uses HOM4PS2 to find all the minima of the tree-level potential, PyMinuit to follow gradients to the minima of the one-loop potential, and CosmoTransitions to calculate tunneling times.
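The two-stage strategy (enumerate all tree-level extrema, then use them as seeds for gradient-based minimization of the loop-corrected potential) can be illustrated in one field dimension. The potential and its "loop correction" below are toys, not a physical model, and the root finding stands in for what HOM4PS2 does algebraically:

```python
# Toy version of the Vevacious strategy: (1) find all extrema of the
# tree-level potential, (2) use them as starting points for gradient
# descent on the loop-corrected potential.

def v_tree(phi):
    """Tree-level toy potential with degenerate minima at phi = +-sqrt(2)."""
    return -phi**2 + 0.25 * phi**4

def v_loop(phi):
    """'Loop-corrected' potential: a small tilt lifts the degeneracy."""
    return v_tree(phi) + 0.1 * phi

def dv(v, phi, h=1e-6):
    """Central-difference derivative."""
    return (v(phi + h) - v(phi - h)) / (2.0 * h)

def tree_extrema(lo=-3.0, hi=3.0, n=601):
    """Locate all extrema of v_tree: scan for sign changes of its
    derivative, then refine each bracket by bisection."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        if dv(v_tree, a) * dv(v_tree, b) < 0:
            for _ in range(60):
                m = 0.5 * (a + b)
                if dv(v_tree, a) * dv(v_tree, m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

def minimize_from(phi, step=1e-3, iters=20000):
    """Plain gradient descent on the loop-corrected potential."""
    for _ in range(iters):
        phi -= step * dv(v_loop, phi)
    return phi

minima = sorted(minimize_from(seed) for seed in tree_extrema())
deepest = min(minima, key=v_loop)
print(f"one-loop minima: {[round(m, 3) for m in minima]}, deepest at {deepest:.3f}")
```

The seed at the tree-level barrier rolls into the deeper (negative-field) minimum, so the degenerate tree-level vacua map onto two distinct one-loop minima of different depth, exactly the situation in which a tunneling-time calculation becomes necessary.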
One of the most popular extensions of the SM is Supersymmetry (SUSY). It is a symmetry relating fermions and bosons, and the only feasible nontrivial extension of the spacetime symmetries. With SUSY it is possible to explain some of the open questions left by the SM while at the same time opening the possibility of gauge unification at a high scale. SUSY theories require the addition of new particles, in particular an extra Higgs doublet and at least as many new scalars as there are fermions in the SM. Much in the same way that the Higgs boson breaks the $SU(2)_L$ symmetry, these new scalars can break any symmetry for which they carry a charge through spontaneous symmetry breaking.
Let us assume there is a local minimum of the potential that reproduces the correct phenomenology for a parameter point of a given model. By exploring whether there are other, deeper minima with VEVs that break symmetries we want to conserve, like $SU(3)_C$ or $U(1)_{EM}$, it is possible to exclude regions of parameter space where that happens. The local minimum with the correct phenomenology might still be metastable, so it is also necessary to calculate the probability of tunneling between minima.
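The tunneling probability rests on the semiclassical decay rate per unit volume, Gamma/V ~ A exp(-S_E), where S_E is the Euclidean bounce action. In the thin-wall approximation (Coleman) the action has a closed form; the sketch below uses illustrative numbers and the rough longevity criterion S_E >~ 400 quoted in the vacuum-stability literature, not results of this work:

```python
import math

# Thin-wall bounce action S_E = 27 pi^2 sigma^4 / (2 dV^3), with wall
# tension sigma and potential difference dV between false and true
# vacuum (natural units). Numbers are illustrative only.

def thin_wall_action(sigma, delta_v):
    """Euclidean bounce action in the thin-wall limit."""
    return 27.0 * math.pi**2 * sigma**4 / (2.0 * delta_v**3)

s_e = thin_wall_action(sigma=1.0, delta_v=0.5)
verdict = "long-lived" if s_e > 400.0 else "short-lived"
print(f"S_E ~ {s_e:.0f} -> {verdict}")  # -> S_E ~ 1066 -> long-lived
```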
In this work we propose and apply a framework to constrain the parameter space of models with many scalars through the minimization of the one-loop effective potential and the calculation of tunneling times at zero and nonzero temperature. After a brief discussion of the shortcomings of the SM and an introduction to the basics of SUSY, we introduce the theory and numerical methods needed for a successful vacuum stability analysis. We then present Vevacious, a public code in which we have implemented our proposed framework. Afterwards we go on to analyze three interesting examples.
For the constrained MSSM (CMSSM) we explore the existence of charge- and color-breaking (CCB) minima and see how it constrains the phenomenologically relevant region of its parameter space at $T = 0$. We show that the regions reproducing the correct Higgs mass and the correct relic density for dark matter all overlap with regions suffering from deeper CCB minima.
Inspired by the results for the CMSSM, we then consider the natural MSSM and check the region of parameter space consistent with the correct Higgs mass against CCB minima at $T \neq 0$. We find that regions of parameter space with CCB minima overlap significantly with those reproducing the correct Higgs mass. When thermal effects are considered, the majority of such points are found to have a desired symmetry breaking minimum with a very low survival probability. In both these studies we find that analytical conditions presented in the literature fail to discriminate regions of parameter space with CCB minima. We also present a way of adapting our framework so that it runs quickly enough for use in parameter fit studies.
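Analytical CCB conditions of the kind referred to here are simple inequalities on soft SUSY-breaking parameters. As an illustration, the following is a schematic check of a classic A-parameter bound along the stop direction; the functional form is the traditional tree-level condition from the literature, and the parameter point is hypothetical:

```python
# Schematic traditional CCB check along the stop direction:
#   A_t^2 <= 3 (m_Q3^2 + m_U3^2 + m_Hu^2 + mu^2).
# The thesis finds that conditions of this type are unreliable; this
# only illustrates the kind of test they provide. Units: GeV / GeV^2.

def ccb_safe_stop_direction(a_t, mu, m_q3_sq, m_u3_sq, m_hu_sq):
    """True if the (approximate, tree-level) CCB bound is satisfied."""
    return a_t**2 <= 3.0 * (m_q3_sq + m_u3_sq + m_hu_sq + mu**2)

# Hypothetical parameter point:
print(ccb_safe_stop_direction(a_t=-2500.0, mu=500.0,
                              m_q3_sq=1.0e6, m_u3_sq=1.0e6,
                              m_hu_sq=2.5e5))  # -> True
```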
Lastly, we show a different example of using vacuum stability in a phenomenological study. For the BLSSM we investigate the violation of R-parity through sneutrino VEVs and determine where in parameter space this happens. By comparing their results to our full numerical analysis, we find that previous analyses in the literature fail to identify regions with R-parity conservation.