In recent years many discoveries have been made that reveal a close relation between quantum information and geometry in the context of the AdS/CFT correspondence. In this duality between a conformal quantum field theory (CFT) and a theory of gravity on Anti-de Sitter (AdS) space, quantum information quantities in the CFT are associated with geometric objects in AdS. The subject of this thesis is the examination of this intriguing property of AdS/CFT. We study two central elements of quantum information: subregion complexity -- a measure of the effort required to construct a given reduced state -- and the modular Hamiltonian -- essentially the negative logarithm of the reduced state under consideration.
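For orientation, the conventions standard in this context are assumed throughout: for a reduced density matrix \(\rho_A\) of a subregion \(A\), the entanglement entropy and the modular Hamiltonian are
\[
S_A = -\operatorname{Tr}(\rho_A \log \rho_A), \qquad K_A = -\log \rho_A ,
\]
so statements about the modular Hamiltonian concern the full operator content of \(\rho_A\), not only its spectrum.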
While a clear definition for subregion complexity in terms of unitary gates exists for discrete systems, a rigorous formulation for quantum field theories is not known.
In AdS/CFT, subregion complexity is proposed to be related to certain codimension one regions on the AdS side.
The main focus of this thesis is the examination of such candidate gravitational duals of subregion complexity.
We introduce the concept of \textit{topological complexity}, which considers subregion complexity to be given by the integral of the Ricci scalar over codimension one regions in AdS. The Gauss-Bonnet theorem provides very general expressions for the topological complexity of CFT\(_2\) states dual to global AdS\(_3\), BTZ black holes and conical defects. In particular, our calculations show that the topology of the considered codimension one bulk region plays an essential role in topological complexity.
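The role played by topology can be made explicit via the Gauss-Bonnet theorem: for a two-dimensional region \(\Sigma\) with boundary \(\partial\Sigma\), Gauss curvature \(K = R/2\) and geodesic curvature \(k_g\) of the boundary,
\[
\int_\Sigma K \, dA + \oint_{\partial\Sigma} k_g \, ds = 2\pi\, \chi(\Sigma),
\]
so the integral of the Ricci scalar over such a codimension one region is fixed by its Euler characteristic \(\chi(\Sigma)\) together with a boundary term.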
Moreover, we study holographic subregion complexity (HSRC), which associates the volume of a particular codimension one bulk region with subregion complexity. We derive an explicit field theory expression for the HSRC of vacuum states. The formulation of HSRC in terms of field theory quantities may make it possible to investigate whether this bulk object indeed provides a concept of subregion complexity on the CFT side. In particular, if this turns out to be the case, our expression for HSRC may be seen as a field theory definition of subregion complexity. We extend our expression to states dual to BTZ black holes and conical defects.
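In the normalization commonly used for this proposal -- assumed here purely for illustration -- the holographic subregion complexity of a boundary region \(A\) reads
\[
\mathcal{C}(A) = \frac{V(R_A)}{8\pi L \, G_N},
\]
where \(R_A\) is the codimension one bulk region enclosed between \(A\) and its Ryu-Takayanagi surface, \(L\) is the AdS radius and \(G_N\) is Newton's constant.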
A further focus of this thesis is the modular Hamiltonian of a family of states \(\rho_\lambda\) depending on a continuous parameter \(\lambda\). Here \(\lambda\) may be associated with the energy density or the temperature, for instance.
The importance of the modular Hamiltonian for quantum information is due to its contribution to relative entropy -- one of the very few objects in quantum information with a rigorous definition for quantum field theories.
The first order contribution in \(\tilde{\lambda}=\lambda-\lambda_0\) of the modular Hamiltonian to the relative entropy between \(\rho_\lambda\) and a reference state \(\rho_{\lambda_0}\) is provided by the first law of entanglement. We study under which circumstances higher order contributions in \(\tilde{\lambda}\) are to be expected.
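In formulas, the relative entropy admits the standard decomposition
\[
S(\rho_\lambda \,\|\, \rho_{\lambda_0}) = \Delta\langle K_{\lambda_0}\rangle - \Delta S ,
\]
with \(\Delta\langle K_{\lambda_0}\rangle = \operatorname{Tr}(\rho_\lambda K_{\lambda_0}) - \operatorname{Tr}(\rho_{\lambda_0} K_{\lambda_0})\) and \(\Delta S = S(\rho_\lambda) - S(\rho_{\lambda_0})\). The first law of entanglement states that \(\Delta\langle K_{\lambda_0}\rangle = \Delta S\) at first order in \(\tilde{\lambda}\), so the relative entropy -- and with it any non-trivial contribution of the modular Hamiltonian -- starts at order \(\tilde{\lambda}^2\).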
We show that for states reduced to two entangling regions \(A\), \(B\) the modular Hamiltonian of at least one of these regions is expected to provide higher order contributions in \(\tilde{\lambda}\) to the relative entropy if \(A\) and \(B\) saturate the Araki-Lieb inequality. The statement of the Araki-Lieb inequality is that the difference between the entanglement entropies of \(A\) and \(B\) is always smaller than or equal to the entanglement entropy of the union of \(A\) and \(B\).
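In formulas, the Araki-Lieb inequality for the entanglement entropies of the two regions reads
\[
|S(A) - S(B)| \leq S(A \cup B).
\]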
Regions for which this inequality is saturated are referred to as entanglement plateaux. In AdS/CFT the relation between geometry and quantum information provides many examples for entanglement plateaux. We apply our result to several of them, including large intervals for states dual to BTZ black holes and annuli for states dual to black brane geometries.
The modular Hamiltonian of reduced states, given essentially by the logarithm of the reduced density matrix, plays an important role within the AdS/CFT correspondence in view of its relation to quantum information. In particular, it is an essential ingredient for quantum information measures of distances between states, such as the relative entropy and the Fisher information metric. However, the modular Hamiltonian is known explicitly only for a few examples. For a family of states \(\rho_\lambda\) that is parametrized by a scalar \(\lambda\), the first order contribution in \(\tilde{\lambda} = \lambda - \lambda_0\) of the modular Hamiltonian to the relative entropy between \(\rho_\lambda\) and a reference state \(\rho_{\lambda_0}\) is completely determined by the entanglement entropy, via the first law of entanglement. For several examples, e.g. for ball-shaped regions in the ground state of CFTs, higher order contributions are known to vanish. In these cases the modular Hamiltonian contributes to the Fisher information metric in a trivial way. We investigate under which conditions the modular Hamiltonian provides a non-trivial contribution to the Fisher information metric, i.e. when the contribution of the modular Hamiltonian to the relative entropy is of higher order in \(\tilde{\lambda}\). We consider one-parameter families of reduced states on two entangling regions that form an entanglement plateau, i.e. the entanglement entropies of the two regions saturate the Araki-Lieb inequality. We show that in general, at least one of the relative entropies of the two entangling regions is expected to involve \(\tilde{\lambda}\) contributions of higher order from the modular Hamiltonian. Furthermore, we consider the implications of this observation for prominent AdS/CFT examples that form entanglement plateaux in the large \(N\) limit.
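Here the Fisher information metric is obtained -- up to a convention-dependent normalization -- from the second-order expansion of the relative entropy in \(\tilde{\lambda}\),
\[
S(\rho_\lambda \,\|\, \rho_{\lambda_0}) = \frac{1}{2}\, G_{\lambda\lambda}\, \tilde{\lambda}^2 + \mathcal{O}(\tilde{\lambda}^3),
\]
so the modular Hamiltonian contributes non-trivially to \(G_{\lambda\lambda}\) precisely when its contribution to the relative entropy is of higher order in \(\tilde{\lambda}\).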
We consider the computation of volumes contained in a spatial slice of AdS\(_3\) in terms of observables in a dual CFT. Our main tool is kinematic space, defined either from the bulk perspective as the space of oriented bulk geodesics, or from the CFT perspective as the space of entangling intervals. We give an explicit formula for the volume of a general region in a spatial slice of AdS\(_3\) as an integral over kinematic space. For the region lying below a geodesic, we show how to write this volume purely in terms of entanglement entropies in the dual CFT. This expression is perhaps most interesting in light of the complexity = volume proposal, which posits that the complexity of holographic quantum states is computed by bulk volumes. An extension of this idea proposes that the holographic subregion complexity of an interval, defined as the volume under its Ryu-Takayanagi surface, is a measure of the complexity of the corresponding reduced density matrix. If this is true, our results give an explicit relationship between entanglement and subregion complexity in CFT, at least in the vacuum. We further extend many of our results to conical defect and BTZ black hole geometries.
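For orientation, the measure on kinematic space used in such computations is the Crofton form built from boundary entanglement entropies; schematically, for intervals with endpoints \(u\) and \(v\),
\[
\omega(u,v) = \frac{\partial^2 S(u,v)}{\partial u \, \partial v}\; du \wedge dv ,
\]
and bulk lengths and volumes in a spatial slice of AdS\(_3\) are obtained as integrals against this form, with normalizations as in the integral-geometry literature.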
The idea that our observable Universe may have originated from a quantum tunneling event out of an eternally inflating false vacuum state is a cornerstone of the multiverse paradigm. Modern theories that are considered as an approach towards the ultraviolet-complete fundamental theory of particles and gravity, such as the various types of string theory, even suggest that a vast landscape of different vacuum configurations exists, and that gravitational tunneling is an important mechanism by which the Universe can explore this landscape. The tunneling scenario also presents a unique framework to address the initial conditions of our observable Universe. In particular, it makes it possible to introduce deviations from the cosmological concordance model in a controlled and well-motivated way. These deviations are a central topic of this work. An important feature of most of the theories mentioned above is the presumed existence of additional space dimensions in excess of the three which we observe in our everyday experience. It was realized that these extra dimensions could avoid our detection if they are compactified to microscopic length scales far beyond the reach of current experiments. There also seem to be natural mechanisms available for dynamical compactification in those theories. These typically lead to a vast landscape of different vacuum configurations which may also differ in the number of macroscopic dimensions, only the total number of dimensions being determined by the theory. Transitions between these vacuum configurations may hence open up new directions which were previously compact, spontaneously compactify some previously macroscopic directions, or otherwise rearrange the configuration of compact and macroscopic dimensions in a more general way. From within the bubble Universe, such a process may be perceived as an anisotropic background spacetime -- intuitively, the dimensions which open up may give rise to preferred directions. If our (3+1)-dimensional observable Universe was born in a process as described above, one may expect to find traces of a preferred direction in cosmological observations. For instance, two directions could be curved like on a sphere, while the third space direction is flat. Using a scenario of gravitational tunneling to fix the initial conditions, I show how the primordial signatures in such an anisotropic Universe can be obtained in principle and work out a particular example in more detail. A small deviation from isotropy also has phenomenological consequences for the later evolution of the Universe. I discuss the most important effects and show that backreaction can be dynamically important. In particular, under certain conditions, a buildup of anisotropic stress in different components of the cosmic fluid can lead to a dynamical isotropization of the total stress-energy tensor. The mechanism is again demonstrated with the help of a physical example.
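A simple realization of the geometry described above -- two directions curved like a sphere and one flat direction -- is a Kantowski-Sachs-type line element, written here schematically as
\[
ds^2 = -dt^2 + a(t)^2\, dz^2 + b(t)^2 \left( d\theta^2 + \sin^2\!\theta\, d\varphi^2 \right),
\]
where the independent evolution of the two scale factors \(a(t)\) and \(b(t)\) encodes the anisotropy.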
The main objectives of the KM3NeT Collaboration are (i) the discovery and subsequent observation of high-energy neutrino sources in the Universe and (ii) the determination of the mass hierarchy of neutrinos. These objectives are strongly motivated by two recent important discoveries, namely: (1) the high-energy astrophysical neutrino signal reported by IceCube and (2) the sizable contribution of electron neutrinos to the third neutrino mass eigenstate as reported by Daya Bay, Reno and others. To meet these objectives, the KM3NeT Collaboration plans to build a new Research Infrastructure consisting of a network of deep-sea neutrino telescopes in the Mediterranean Sea. A phased and distributed implementation is pursued which maximises the access to regional funds, the availability of human resources and the synergistic opportunities for the Earth and sea sciences community. Three suitable deep-sea sites are selected, namely off-shore Toulon (France), Capo Passero (Sicily, Italy) and Pylos (Peloponnese, Greece). The infrastructure will consist of three so-called building blocks. A building block comprises 115 strings, each string comprises 18 optical modules and each optical module comprises 31 photo-multiplier tubes. Each building block thus constitutes a three-dimensional array of photo sensors that can be used to detect the Cherenkov light produced by relativistic particles emerging from neutrino interactions. Two building blocks will be sparsely configured to fully explore the IceCube signal with similar instrumented volume, different methodology, improved resolution and
A highly significant excess of high-energy astrophysical neutrinos has been reported by the IceCube Collaboration. Some features of the energy and declination distributions of IceCube events hint at a North/South asymmetry of the neutrino flux. This could be due to the presence of the bulk of our Galaxy in the Southern hemisphere. The ANTARES neutrino telescope, located in the Mediterranean Sea, has been taking data since 2007. It offers the best sensitivity to muon neutrinos produced by galactic cosmic ray interactions in this region of the sky. In this letter a search for an extended neutrino flux from the Galactic Ridge region is presented. Different models of neutrino production by cosmic ray propagation are tested. No excess of events is observed and upper limits for different neutrino flux spectral indices \(\Gamma\) are set. For \(\Gamma = 2.4\) the 90% confidence level flux upper limit at 100 TeV for one neutrino flavour corresponds to \(\Phi^{1f}_{0}(100\,\mathrm{TeV}) = 2.0 \cdot 10^{-17}\,\mathrm{GeV}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}\). Under this assumption, at most two events of the IceCube cosmic candidates can originate from the Galactic Ridge. A simple power-law extrapolation of the Fermi-LAT flux to account for the IceCube High Energy Starting Events is excluded at 90% confidence level.
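The quoted limit refers to a single-flavour power-law flux, parametrized (with the pivot energy chosen at 100 TeV) as
\[
\Phi^{1f}(E) = \Phi^{1f}_{0} \left( \frac{E}{100\,\mathrm{TeV}} \right)^{-\Gamma},
\]
so that for each assumed spectral index \(\Gamma\) the analysis constrains the normalization \(\Phi^{1f}_{0}\).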
A search for high-energy neutrino emission correlated with gamma-ray bursts outside the electromagnetic prompt-emission time window is presented. Using a stacking approach of the time delays between reported gamma-ray burst alerts and spatially coincident muon-neutrino signatures, data from the ANTARES neutrino telescope recorded between 2007 and 2012 are analysed. One year of public data from the IceCube detector, taken between 2008 and 2009, has also been investigated. The respective timing profiles are scanned for statistically significant accumulations within 40 days of the gamma-ray burst, as expected from Lorentz invariance violation effects and some astrophysical models. No significant excess over the expected accidental coincidence rate could be found in either of the two data sets. The average strength of the neutrino signal is found to be fainter than one detectable neutrino signal per hundred gamma-ray bursts in the ANTARES data at 90% confidence level.
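The extended search window is motivated by the energy-dependent time delay expected for linear-order Lorentz invariance violation; schematically, neglecting cosmological redshift factors,
\[
\Delta t \simeq \frac{D}{c}\, \frac{E_\nu}{E_{\mathrm{QG}}},
\]
where \(D\) is the distance to the source, \(E_\nu\) the neutrino energy and \(E_{\mathrm{QG}}\) the scale at which quantum-gravity effects modify the dispersion relation.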
A search for Secluded Dark Matter annihilation in the Sun using 2007-2012 data of the ANTARES neutrino telescope is presented. Three different cases are considered: a) detection of dimuons that result from the decay of the mediator; b) detection of neutrinos from a mediator that decays into a dimuon pair, which in turn decays into neutrinos; and c) detection of neutrinos from a mediator that decays directly into neutrinos. As no significant excess over the background is observed, constraints are derived on the dark matter mass and the lifetime of the mediator.
A search for muon neutrinos originating from dark matter annihilations in the Sun is performed using the data recorded by the ANTARES neutrino telescope from 2007 to 2012. In order to obtain the best possible sensitivities to dark matter signals, an optimisation of the event selection criteria is performed taking into account the background of atmospheric muons, atmospheric neutrinos and the energy spectra of the expected neutrino signals. No significant excess over the background is observed and 90% C.L. upper limits on the neutrino flux, the spin-dependent and spin-independent WIMP-nucleon cross-sections are derived for WIMP masses ranging from 50 GeV to 5 TeV for the annihilation channels WIMP + WIMP \(\to\) b\(\overline{b}\), W\(^{+}\)W\(^{−}\) and τ\(^{+}\)τ\(^{−}\).
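Translating the neutrino-flux limits into limits on the WIMP-nucleon cross sections relies on the assumption, standard in solar dark matter searches and presumably adopted here, that capture and annihilation in the Sun are in equilibrium. In that case the annihilation rate is fixed by the solar capture rate \(C_\odot\),
\[
\Gamma_{\mathrm{ann}} = \frac{C_\odot}{2}, \qquad \frac{d\Phi_\nu}{dE_\nu} = \frac{\Gamma_{\mathrm{ann}}}{4\pi d^2}\, \frac{dN_\nu}{dE_\nu},
\]
where \(d\) is the Sun-Earth distance and \(dN_\nu/dE_\nu\) the neutrino spectrum per annihilation for the channel under consideration.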
We consider the process of muon-electron elastic scattering, which has been proposed as an ideal framework to measure the running of the electromagnetic coupling constant at space-like momenta and determine the leading-order hadronic contribution to the muon g-2 (MUonE experiment). We compute the next-to-leading-order (NLO) contributions due to QED and purely weak corrections and implement them into a fully differential Monte Carlo event generator, which is available for first experimental studies. We show representative phenomenological results of interest for the MUonE experiment and examine in detail the impact of the various sources of radiative corrections under different selection criteria, in order to study the dependence of the NLO contributions on the applied cuts. This study represents the first step towards the realisation of a high-precision Monte Carlo code necessary for data analysis.
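To illustrate the basic structure of such a tool -- and emphatically not the NLO generator developed in the paper -- the following minimal Python sketch generates leading-order QED \(\mu e \to \mu e\) events: it importance-samples the momentum transfer \(t\), estimates the cross section above an assumed electron-recoil-energy threshold, and unweights events by accept-reject. The beam energy, the threshold and all numerical settings are illustrative assumptions.

import numpy as np

# Toy leading-order mu-e elastic scattering Monte Carlo (illustrative sketch only).
alpha = 1.0 / 137.035999      # fine-structure constant
m_mu  = 0.105658              # muon mass [GeV]
m_e   = 0.000511              # electron mass [GeV]
E_mu  = 150.0                 # assumed muon beam energy on a fixed electron target [GeV]
GEV2_TO_MUB = 389.379         # conversion factor GeV^-2 -> microbarn

s   = m_mu**2 + m_e**2 + 2.0 * m_e * E_mu
lam = (s - (m_mu + m_e)**2) * (s - (m_mu - m_e)**2)   # Kallen function lambda(s, m_mu^2, m_e^2)

def dsigma_dt(t):
    """Spin-averaged LO QED differential cross section dsigma/dt in GeV^-4."""
    u  = 2.0 * m_mu**2 + 2.0 * m_e**2 - s - t
    m2 = 2.0 * (4.0 * np.pi * alpha)**2 / t**2 * (
        (s - m_mu**2 - m_e**2)**2 + (u - m_mu**2 - m_e**2)**2
        + 2.0 * t * (m_mu**2 + m_e**2))
    return m2 / (16.0 * np.pi * lam)

# Kinematic endpoint t_min and a cut |t| > |t_cut| implementing an assumed
# 1 GeV electron recoil energy threshold, using E_e' = m_e - t / (2 m_e).
t_min = -lam / s
E_cut = 1.0
t_cut = -2.0 * m_e * (E_cut - m_e)

# Importance sampling with density p(t) proportional to 1/t^2: x = 1/t is uniform.
rng  = np.random.default_rng(1)
N    = 200_000
x    = rng.uniform(1.0 / t_cut, 1.0 / t_min, N)
t    = 1.0 / x
norm = 1.0 / t_min - 1.0 / t_cut
w    = dsigma_dt(t) * t**2 * norm          # event weight = (dsigma/dt) / p(t)

sigma = w.mean() * GEV2_TO_MUB
error = w.std(ddof=1) / np.sqrt(N) * GEV2_TO_MUB
print(f"LO cross section for E_e' > {E_cut} GeV: {sigma:.1f} +- {error:.1f} microbarn")

# Unweighted events (values of t) via accept-reject on the weights.
accepted_t = t[rng.uniform(0.0, w.max(), N) < w]
print(f"unweighting efficiency: {accepted_t.size / N:.2f}")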