
Leptoquarks are hypothetical particles that attempt to explain the coincidental similarities between the leptons and quarks included in the Standard Model (SM). Their exact properties vary between theoretical models, and there are no strong theoretical constraints on their possible mass values. They could be produced in particle collisions, and searches have already been carried out at previous collider experiments. Their presence has not yet been observed, and this fact has been translated into lower-bound exclusions on their possible mass values. The Large Hadron Collider (LHC), the most recently constructed particle collider and the one with the highest collision energies ever achieved experimentally, provides a new platform to continue the search for leptoquarks at even higher mass ranges.
This thesis describes a search for pair-produced second-generation leptoquarks using 20.3 fb\(^{-1}\) of data recorded by the ATLAS detector at the LHC at \(\sqrt{s}\) = 8 TeV. Events with two oppositely charged muons and two or more jets in the final state were used. Candidate leptoquark events were selected with the help of four observables: the di-muon invariant mass (\(M_{\mu\mu}\)), the sum of the \(p_T\) of the two muons (\(L_T\)), the sum of the \(p_T\) of the two leading jets (\(H_T\)) and the average leptoquark mass (\(M_{LQ}\)). Monte Carlo simulations of SM background processes were shown to be in good agreement with data, both in the region constructed using the selection requirements for candidate leptoquark events and in the designated control regions. Since no significant excess of events was observed in data, an exclusion limit was set as a function of the leptoquark mass.
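The four observables above can be computed directly from the muon and jet four-momenta. The sketch below is illustrative only: the flat tuple event format, the function name, and the pairing rule used to build the average leptoquark mass \(M_{LQ}\) (choosing the muon-jet pairing that minimises the mass difference, a common heuristic) are assumptions, not details taken from the thesis.

```python
import math

def observables(muons, jets):
    """Compute the four leptoquark-search observables for one candidate event.

    `muons` and `jets` are pT-ordered lists of (pt, eta, phi, E) tuples;
    this event format is an illustrative assumption, not the ATLAS data model.
    """
    (pt1, eta1, phi1, e1), (pt2, eta2, phi2, e2) = muons[0], muons[1]

    def p4(pt, eta, phi, e):
        # Cartesian four-vector (px, py, pz, E) from collider coordinates.
        return (pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta), e)

    def add(a, b):
        return tuple(x + y for x, y in zip(a, b))

    def mass(p):
        px, py, pz, e = p
        return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

    mu1, mu2 = p4(pt1, eta1, phi1, e1), p4(pt2, eta2, phi2, e2)
    j1, j2 = (p4(*j) for j in jets[:2])

    m_mumu = mass(add(mu1, mu2))       # di-muon invariant mass
    lt = pt1 + pt2                     # scalar sum of the two muon pT
    ht = jets[0][0] + jets[1][0]       # scalar sum of the two leading-jet pT

    # Average leptoquark mass: pick the muon-jet pairing whose two masses
    # are closest to each other (illustrative heuristic).
    m_a1, m_a2 = mass(add(mu1, j1)), mass(add(mu2, j2))
    m_b1, m_b2 = mass(add(mu1, j2)), mass(add(mu2, j1))
    if abs(m_a1 - m_a2) < abs(m_b1 - m_b2):
        m_lq = 0.5 * (m_a1 + m_a2)
    else:
        m_lq = 0.5 * (m_b1 + m_b2)
    return m_mumu, lt, ht, m_lq
```

The selection then amounts to cuts on the returned tuple, tightened as a function of the leptoquark mass hypothesis.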

It is natural to consider the possibility that the most energetic particles detected (> 10\(^{18}\) eV), ultra-high-energy cosmic rays (UHECRs), originate in the most luminous transient events observed (> 10\(^{52}\) erg s\(^{-1}\)), gamma-ray bursts (GRBs). As a result of the interaction of highly accelerated, magnetically confined protons and ions with the photon field inside the burst, both neutrons and UHE neutrinos are expected to be created: the former escape the source and beta-decay into protons which propagate to Earth, where they are detected as UHECRs, while the latter, if detected, would constitute the smoking gun of hadronic acceleration in the sources.
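For concreteness, the production chain alluded to here is the standard photohadronic (\(\Delta\)-resonance) channel; this is textbook background, not a detail taken from the thesis itself:

```latex
p + \gamma \;\to\; \Delta^{+} \;\to\;
\begin{cases}
  n + \pi^{+} & (\text{branching ratio} \approx 1/3)\\[2pt]
  p + \pi^{0} & (\text{branching ratio} \approx 2/3)
\end{cases}
```

followed by \(\pi^{+} \to \mu^{+}\nu_{\mu}\) and \(\mu^{+} \to e^{+}\nu_{e}\bar\nu_{\mu}\) (the UHE neutrinos), and by \(n \to p\,e^{-}\bar\nu_{e}\) (the escaping neutrons that become the UHECR protons).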
Recently, km-scale neutrino telescopes such as IceCube have finally reached the sensitivities required to probe the neutrino predictions of some of the existing GRB models. On that account, we present here a revised, self-consistent model of joint UHE proton and neutrino production at GRBs that includes a state-of-the-art, improved numerical calculation of the neutrino flux (NeuCosmA); that uses a generalised UHECR emission model where some of the protons in the sources are able to "leak out" of their magnetic confinement before having interacted; and that takes into account the energy losses of the protons during their propagation to Earth. We use our predictions to take a close look at the cosmic ray-neutrino connection and find that the current UHECR observations by giant air shower detectors, together with the upper bounds on the flux of neutrinos from GRBs, are already sufficient to put tension on several possibilities of particle emission and propagation, and to point us towards some requirements that should be fulfilled by GRBs if they are to be the sources of the UHECRs. We further refine our analysis by studying a dynamical burst model, where we find that the different particle species originate at distinct stages of the expanding GRB, each under particular conditions. Finally, we consider the possibility of new physics: the effect of neutrino decay on the flux of UHE neutrinos from GRBs. On the whole, our results demonstrate that self-consistent models of particle production are now integral to the advancement of the field, given that the full picture of the UHE Universe will only emerge as a result of looking at the multi-messenger sky, i.e., at gamma-rays, cosmic rays, and neutrinos simultaneously.

One of the most popular extensions of the SM is Supersymmetry (SUSY). It is a symmetry relating fermions and bosons, and also the only feasible extension of the symmetries of spacetime. With SUSY it is then possible to explain some of the open questions left by the SM while at the same time opening the possibility of gauge unification at a high scale. SUSY theories require the addition of new particles, in particular an extra Higgs doublet and at least as many new scalars as fermions in the SM. Much in the same way that the Higgs boson breaks \(SU(2)_L\) symmetry, these new scalars can break any symmetry for which they carry a charge through spontaneous symmetry breaking.
Let us assume there is a local minimum of the potential that reproduces the correct phenomenology for a parameter point of a given model. By exploring whether there are other, deeper minima with VEVs that break symmetries we want to conserve, like \(SU(3)_C\) or \(U(1)_{EM}\), it is possible to exclude regions of parameter space where that happens. The local minimum with the correct phenomenology might still be metastable, so it is also necessary to calculate the probability of tunneling between minima.
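In the standard semiclassical treatment (quoted here as background; the abstract itself does not spell out the formula), the tunneling rate per unit volume out of a metastable vacuum is controlled by the Euclidean bounce action \(S_E\):

```latex
\frac{\Gamma}{V} \;\approx\; A\, e^{-S_E[\phi_b]},
```

where \(\phi_b\) is the bounce solution and \(A\) is a dimensionful prefactor of order the fourth power of the relevant mass scale. Demanding that the lifetime of the desired vacuum exceed the age of the universe then translates, for electroweak-scale prefactors, into a bound of roughly \(S_E \gtrsim 400\).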
In this work we propose and apply a framework to constrain the parameter space of models with many scalars through the minimization of the one-loop effective potential and the calculation of tunneling times at zero and non-zero temperature. After a brief discussion of the shortcomings of the SM and an introduction to the basics of SUSY, we introduce the theory and numerical methods needed for a successful vacuum stability analysis. We then present Vevacious, a public code in which we have implemented our proposed framework. Afterwards we go on to analyze three interesting examples.
For the constrained MSSM (CMSSM) we explore the existence of charge- and color-breaking (CCB) minima and see how it constrains the phenomenologically relevant region of its parameter space at T = 0. We show that the regions reproducing the correct Higgs mass and the correct relic density for dark matter all overlap with regions suffering from deeper CCB minima.
Inspired by the results for the CMSSM, we then consider the natural MSSM and check the region of parameter space consistent with the correct Higgs mass against CCB minima at T ≠ 0. We find that regions of parameter space with CCB minima overlap significantly with those reproducing the correct Higgs mass. When thermal effects are considered, the majority of such points are then found to have a desired symmetry-breaking minimum with very low survival probability. In both these studies we find that analytical conditions presented in the literature fail to discriminate regions of parameter space with CCB minima. We also present a way of adapting our framework so that it runs quickly enough for use with parameter fit studies.
Lastly we show a different example of using vacuum stability in a phenomenological study. For the BLSSM we investigate the violation of R-parity through sneutrino VEVs and where in parameter space this happens. By comparing their results to our full numerical analysis, we find that previous analyses in the literature fail to identify regions with R-parity conservation.

It is generally agreed that the Standard Model of particle physics can only be viewed as an effective theory that needs to be extended, as it leaves some essential questions unanswered. The exact realization of the necessary extension is subject to discussion. Supersymmetry is among the most promising approaches to physics beyond the Standard Model, as it can simultaneously solve the hierarchy problem and provide an explanation for the dark matter abundance in the universe. Despite further virtues like gauge coupling unification and radiative electroweak symmetry breaking, minimal supersymmetric models cannot be the ultimate answer to the open questions of the Standard Model, as they still do not incorporate neutrino masses and are, besides, heavily constrained by LHC data. This does not, however, detract from the beauty of the concept of supersymmetry. It is therefore time to explore non-minimal supersymmetric models which are able to close these gaps, review their consistency, test them against experimental data and provide prospects for future experiments.
The goal of this thesis is to contribute to this process by exploring an extraordinarily well-motivated class of models based upon a left-right symmetric gauge group. While relaxing the tension with LHC data, these models automatically include the ingredients for neutrino masses.
We start with a left-right supersymmetric model at the TeV scale in which scalar \(SU(2)_R\) triplets are responsible for the breaking of left-right symmetry as well as for the generation of neutrino masses. Although a tachyonic doubly-charged scalar is present at tree level in this kind of model, we show by performing the first complete one-loop evaluation that it gains a real mass at the loop level. The constraints on the predicted additional charged gauge bosons are then evaluated using LHC data, and we find that we can explain small excesses in the data; the current LHC run will reveal whether these are actual new-physics signals or just background fluctuations. In a careful evaluation of the loop-corrected scalar potential we then identify parameter regions in which the vacuum with the phenomenologically correct symmetry-breaking properties is stable. Conveniently, those regions favour low left-right symmetry-breaking scales which are accessible at the LHC.
In a slightly modified version of this model where a \(U(1)_R × U(1)_{B−L}\) gauge symmetry survives down to the TeV scale, we implement a minimal gauge-mediated supersymmetry breaking mechanism for which we calculate the boundary conditions in the presence of gauge kinetic mixing. We show how the presence of the extended gauge group raises the tree-level Higgs mass considerably so that the need for heavy supersymmetric spectra is relaxed. Taking the constraints from the Higgs sector into account, we then explore the LHC phenomenology of this model and point out where the expected collider signatures can be distinguished from standard scenarios.
In particular, if neutrino masses are explained by low-scale seesaw mechanisms, as is done throughout this work, there are potentially spectacular signals at low-energy experiments which search for charged lepton flavour violation. The last part of this thesis is dedicated to the detailed exploration of processes like \(\mu \to e \gamma\), \(\mu \to 3 e\) or \(\mu\)–\(e\) conversion in nuclei in a supersymmetric framework with an inverse seesaw mechanism. In particular, we disprove claims about a non-decoupling effect in Z-mediated three-body decays and study the prospects for discovering and distinguishing signals at near-future experiments. In this context we identify the possibility of deducing from ratios like BR(\(\tau \to 3 \mu\))/BR(\(\tau \to \mu e^+ e^-\)) whether the contributions from \(\nu\)–\(W\) loops dominate over supersymmetric contributions or vice versa.

The results of two analyses searching for supersymmetry (SUSY) in data from the ATLAS experiment are presented in this thesis. The data were recorded in proton-proton collisions at the Large Hadron Collider in 2012 at a centre-of-mass energy of \(\sqrt{s}\) = 8 TeV and correspond to an integrated luminosity of 20.3 fb\(^{−1}\). The first search is performed in signatures containing an opposite-sign electron or muon pair, which is compatible with originating from a Z boson decay, in addition to jets and large missing transverse momentum. The analysis targets the production of squarks and gluinos in R-parity conserving (RPC) models with SUSY breaking via General Gauge Mediation (GGM). The main Standard Model (SM) backgrounds are \(t\overline t\), WW, W+t and \(Z \to \tau\tau\) processes, which are entirely estimated from data using different-flavour events. Besides that, the SM production of Z bosons in association with jets and large fake missing momentum from mismeasurements plays a role and is predicted with the data-driven jet smearing method. Backgrounds from events with fake leptons are estimated with the data-driven matrix method. WZ/ZZ production as well as smaller background contributions are determined from Monte Carlo simulations. The search observes an excess of data over the SM prediction with a local significance of 3.0 \(\sigma\) in the electron channel, 1.7 \(\sigma\) in the muon channel and 3.0 \(\sigma\) when the two channels are added together. The results are used to constrain the parameters of the GGM model. The second analysis uses the already published results of an ATLAS search for SUSY in events with one isolated electron or muon, jets and missing transverse momentum to reinterpret them in the context of squark and gluino production in SUSY models with R-parity violating (RPV) \(LQ\overline D\)-operators. In contrast to RPC models, the lightest SUSY particle (LSP) is not stable but decays into SM particles.
"Standard" analyses often do not consider SUSY models with RPV although they are in principle sensitive to them. The exclusion limits on the squark and gluino mass obtained from the reinterpretation extend up to 1200 GeV. These are the first results by any ATLAS SUSY search which systematically cover a wide range of RPV couplings in the case of prompt LSP decays. However, the analysis is not sensitive to the full parameter space of the \(LQ\overline D\)-model and reveals gaps in the ATLAS SUSY program which have to be closed by dedicated search strategies in the future.
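The matrix method mentioned above can be sketched in its simplest, single-lepton form (the ATLAS analysis uses a per-event, dilepton generalisation; the function name and the example numbers below are illustrative):

```python
def fake_estimate(n_loose, n_tight, eff_real, eff_fake):
    """Single-lepton matrix method (illustrative sketch).

    Given the lepton counts passing a loose and a tight selection, and the
    efficiencies with which real and fake loose leptons also pass tight,
    solve the linear system
        n_tight = eff_real * n_real + eff_fake * n_fake
        n_loose = n_real + n_fake
    for the fake-lepton contribution to the tight sample.
    """
    # Number of fake leptons in the loose sample.
    n_fake_loose = (eff_real * n_loose - n_tight) / (eff_real - eff_fake)
    # Fake contribution that survives the tight selection.
    return eff_fake * n_fake_loose
```

For example, with eff_real = 0.9, eff_fake = 0.2 and a loose sample of 1000 real plus 500 fake leptons (so n_loose = 1500, n_tight = 1000), the method recovers the 100 fakes expected in the tight sample.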

In this thesis two main projects are presented, both aiming at the overall goal
of particle detector development. In the first part of the thesis detailed shielding
studies are discussed, focused on the shielding section of the planned New Small
Wheel as part of the ATLAS detector upgrade. Those studies supported the discussions
within the upgrade community and decisions made on the final design of
the New Small Wheel. The second part of the thesis covers the design, construction
and functional demonstration of a test facility for gaseous detectors at the
University of Würzburg. Additional studies on the trigger system of the facility are
presented. In particular, the precision and reliability of the reference timing signals were
investigated.

The measurement of the mass of the $W$ boson is currently one of the most promising precision analyses of the Standard Model, one that could ultimately reveal a hint of new physics.
The mass of the $W$ boson is determined by comparing the $W$ boson, which cannot be reconstructed directly, to the $Z$ boson, where the full decay signature is available. With the help of Monte Carlo simulations one can extrapolate from the $Z$ boson to the $W$ boson.
Technically speaking, the measurement of the $W$ boson mass is performed by comparing data taken by the ATLAS experiment to a set of calibrated Monte Carlo simulations which reflect different mass hypotheses.
A dedicated calibration of the reconstructed objects in the simulations is crucial for a high precision of the measured value.
The comparison of simulated $Z$ boson events to reconstructed $Z$ boson candidates in data allows one to derive event weights and scale factors for the calibration.
This thesis presents a new approach to reweight the hadronic recoil in the simulations. The focus of the calibration is on the average hadronic activity, visible in the mean of the scalar sum of the hadronic recoil, $\Sigma E_T$, as a function of pileup. In contrast to the standard method, which directly reweights the scalar sum, the dependence on the transverse boson momentum is less strongly affected here.
The $\Sigma E_T$ distribution is modeled first by means of its pileup dependence. Then, the remaining differences in the resolution of the vector sum of the hadronic recoil are scaled. This is done separately for the parallel and the perpendicular component of the hadronic recoil with respect to the reconstructed boson.
This calibration was developed for the dataset taken by the ATLAS experiment at a center-of-mass energy of $8\,\textrm{TeV}$ in 2012. In addition, the same reweighting procedure is applied to the recent dataset with a low pileup contribution, the \textit{lowMu} runs at $5\,\textrm{TeV}$ and at $13\,\textrm{TeV}$, taken by ATLAS in November 2017. The relevant aspects of the reweighting procedure are presented in this thesis. This reweighting approach is shown to improve the agreement between data and the simulations effectively for all datasets.
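As an illustration of this kind of calibration, a toy two-dimensional reweighting of simulated events in ($\Sigma E_T$, pileup) can be written as follows. The binning, the function name and the plain histogram-ratio approach are assumptions for the sketch; the actual procedure described above reweights the mean $\Sigma E_T$ versus pileup and then scales resolutions separately.

```python
import numpy as np

def recoil_weights(mc_sumet, mc_mu, data_sumet, data_mu, sumet_edges, mu_edges):
    """Per-event weights mapping the simulated (sum E_T, pileup) distribution
    onto data: a toy stand-in for the calibration described in the text.
    """
    # Normalised 2D histograms of data and simulation.
    h_data, _, _ = np.histogram2d(data_sumet, data_mu,
                                  bins=(sumet_edges, mu_edges), density=True)
    h_mc, _, _ = np.histogram2d(mc_sumet, mc_mu,
                                bins=(sumet_edges, mu_edges), density=True)
    # Data/MC ratio per bin; unit weight where the simulation is empty.
    ratio = np.where(h_mc > 0, h_data / np.maximum(h_mc, 1e-12), 1.0)
    # Look up the bin of each simulated event and return its weight.
    ix = np.clip(np.digitize(mc_sumet, sumet_edges) - 1, 0, len(sumet_edges) - 2)
    iy = np.clip(np.digitize(mc_mu, mu_edges) - 1, 0, len(mu_edges) - 2)
    return ratio[ix, iy]
```

Applying these weights pulls the simulated $\Sigma E_T$ spectrum, in each pileup bin, towards the one observed in data.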
The uncertainties of this reweighting approach, as well as the statistical errors, are evaluated for a $W$ mass measurement by a template fit to pseudodata for the \textit{lowMu} dataset. A first estimate of these uncertainties is given here. For the pfoEM algorithm, a statistical uncertainty of $17\,\text{MeV}$ for the $5\,\textrm{TeV}$ dataset and of $18\,\text{MeV}$ for the $13\,\textrm{TeV}$ dataset is found for the $W \rightarrow \mu \nu$ analysis. The systematic uncertainty introduced by the resolution scaling has the largest effect; a value of $15\,\text{MeV}$ is estimated for the $13\,\textrm{TeV}$ dataset in the muon channel.
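The template fit named here can be illustrated with a minimal $\chi^2$ scan over mass hypotheses. This is a sketch, not the ATLAS implementation: the histogram contents, the binning and the parabolic interpolation of the minimum are illustrative choices.

```python
import numpy as np

def template_fit(pseudodata, templates, masses):
    """Minimal chi-square template fit: compare a pseudodata histogram with
    templates for several mass hypotheses and interpolate the chi-square
    minimum with a parabola through its three neighbouring points.
    """
    chi2 = []
    for t in templates:
        # Normalise each template to the pseudodata yield.
        expected = t * (pseudodata.sum() / t.sum())
        mask = expected > 0
        chi2.append(np.sum((pseudodata[mask] - expected[mask]) ** 2
                           / expected[mask]))
    chi2 = np.asarray(chi2, dtype=float)
    i = int(np.argmin(chi2))
    i = min(max(i, 1), len(masses) - 2)   # keep a 3-point neighbourhood
    x = np.asarray(masses[i - 1:i + 2], dtype=float)
    a, b, _ = np.polyfit(x, chi2[i - 1:i + 2], 2)
    return -b / (2.0 * a)                 # vertex of the parabola
```

Repeating the fit on many pseudodata sets then gives the statistical spread quoted as the statistical uncertainty of the measured mass.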

The quest for a unifying theory of quantum theory and gravity lies at the heart of much of the research in high energy physics. Although recent years have witnessed spectacular experimental confirmation of our expectations from Quantum Field Theory and General Relativity, the question of unification remains a major open problem. In this context, the perturbative aspects of quantum black holes arguably represent the best of our knowledge of how to proceed in this pursuit.
In this thesis we investigate certain aspects of quantum gravity in 2 + 1 dimensional anti-de Sitter space (AdS3), and its connection to conformal field theories in 1 + 1 dimensions (CFT2), via the AdS/CFT correspondence.
We study the thermodynamic properties of higher spin black holes. By focusing on the spin-4 case, we show that black holes carrying higher spin charges display a rich phase diagram in the grand canonical ensemble, including phase transitions of the Hawking-Page type, first-order inter-black-hole transitions, and a second-order critical point.
We investigate recent proposals on the connection between bulk codimension-1 volumes and computational complexity in the CFT. Using Tensor Networks we provide concrete evidence of why these bulk volumes are related to the number of gates in a quantum circuit, and exhibit their topological properties. We provide a novel formula to compute this complexity directly in terms of entanglement entropies, using techniques from Kinematic space.
We then move in a slightly different direction and study the quantum properties of black holes via the Functional Renormalisation Group prescription coming from asymptotic safety. We avoid the arbitrary scale setting by restricting to a narrower window in parameter space, where only Newton's coupling and the cosmological constant are allowed to vary. Under one assumption on the properties of Newton's coupling, we find black hole solutions explicitly. We explore their thermodynamic properties, and discover that very large black holes exhibit very unusual features.

This work consists of two parts. On the one hand, it describes simulation and
measurement of the effect of contaminations of the detector gas on the performance
of particle detectors, with special focus on Micromegas detectors. On the other
hand, it includes the setup of a production site for the finalization of drift panels
which are going to be used in the ATLAS NSW (New Small Wheel). These two parts
are preceded by an introduction to the theoretical foundations of gaseous particle
detectors.