Doctoral Theses, 2011 (11 results, all with fulltext and part of the bibliography)
Institute: Institut für Theoretische Physik und Astrophysik
Keywords: Kosmologie (3), AGN (2), Astrophysik (2), Blazar (2), Elementarteilchenphysik (2), Mathematisches Modell (2), Neutrino (2), Strahlung (2), blazar (2), Aktive Galaxienkerne Blazare (1)
Today, the idea of cosmological inflation constitutes an important extension of Big Bang theory. Since its appearance in the early 1980s, many physical mechanisms have been worked out that put the inflationary expansion of space that precedes the Hot Big Bang on a sound theoretical basis. Among the achievements of the theory of inflation are the explanation of the almost Euclidean geometry of 'visible' space and of the homogeneity of the cosmic background radiation, but in particular also of its tiny inhomogeneities with a relative amplitude of 10^-5. In many models of inflation the inflationary phase ends only locally. Hence, there exists the possibility that the inflationary process still goes on in regions beyond our visual horizon, a property commonly termed 'eternal inflation'. In the framework of cosmological scalar fields, eternal inflation can manifest itself in a variety of ways. On the one hand, fluctuations of the field, if sufficiently large, can work against the classical trajectory and therefore counteract the end of inflation. In regions where this is the case, the accelerated expansion of space continues at a higher rate. In parts of such a region the process may replicate itself and in this way continue throughout all of time; space and field are said to reproduce themselves. On the other hand, a mechanism that can occur in addition to or independently of the latter is so-called vacuum tunneling. If the potential of the scalar field has several local minima, a semi-classical calculation suggests that within a spherical region, a bubble, the field can tunnel to another state. The respective tunneling rates depend on the potential difference and the shape of the potential between the states. Generally, the tunneling rate is exponentially suppressed, which means that inflation lasts for a long time before tunneling takes place.
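The exponential suppression of the tunneling rate can be made concrete in Coleman's thin-wall approximation, where the rate per unit volume scales as Γ ~ A·exp(−B) with a bounce action set by the wall tension and the vacuum energy difference. A minimal sketch of this textbook formula (the numerical inputs below are arbitrary illustrative values in natural units, not parameters from the thesis):

```python
import math

def thin_wall_exponent(sigma, epsilon):
    """Coleman's thin-wall bounce action B = 27*pi^2*sigma^4 / (2*epsilon^3).

    sigma:   surface tension of the bubble wall (natural units)
    epsilon: potential-energy difference between false and true vacuum
    The nucleation rate per unit volume scales as Gamma ~ A * exp(-B),
    so a small epsilon makes the false vacuum exponentially long-lived.
    """
    return 27 * math.pi**2 * sigma**4 / (2 * epsilon**3)

# A smaller potential difference gives a much larger B, hence a far
# more suppressed tunneling rate:
print(thin_wall_exponent(sigma=1.0, epsilon=0.1))
print(thin_wall_exponent(sigma=1.0, epsilon=1.0))
```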
The ongoing inflationary process effectively reduces local curvature, anisotropy and inhomogeneity, a property known as the 'cosmic no-hair conjecture'. For this reason, cosmological considerations of the evolution of bubbles have thus far almost entirely involved vacuum (de Sitter) backgrounds. However, new insights in the framework of string theory suggest high tunneling rates, which allow for the possibility of bubble nucleation in backgrounds that are not vacuum dominated. In this case the evolution of the bubble depends on the properties of the background spacetime. A deeper introduction in chapter 4 is followed by the presentation of the Lemaître-Tolman spacetime in chapter 5, which constitutes the background spacetime in our study of the effect of matter and inhomogeneity on the evolution of vacuum bubbles. In chapter 6 we explicitly describe the application of the 'thin-shell' formalism and the resulting system of equations. This is followed in chapter 7 by a detailed analysis of bubble evolution in various limits of the Lemaître-Tolman spacetime and in a Robertson-Walker spacetime with a rapid phase transition. The central observations are that the presence of dust, at a fixed surface energy density, goes along with a smaller nucleation volume and possibly leads to a collapse of the bubble. In an expanding background, a radially inhomogeneous dust profile is efficiently diluted, so that there is essentially no effect on the evolution of the domain wall. This changes for a radially inhomogeneous curvature profile: positive curvature decelerates the expansion of the bubble. Moreover, we point out that the adopted approach does not allow for a treatment of the physically expected matter transfer, so that the results are to be understood as preliminary under this caveat. In the second part of this thesis we consider potential observable consequences of bubble collisions in the cosmic microwave background radiation.
The topological nature of the signal suggests the use of statistics that are well suited to quantify the morphological properties of the temperature fluctuations. In chapter 10 we present Minkowski Functionals (MFs), which provide exactly such statistics. The presented error analysis allows for a higher precision of numerical MFs in comparison to earlier methods. In chapter 12 we present the application of our algorithm to a Gaussian map and a collision map. We motivate the expected MFs and extract their numerical counterparts. We find that our least-squares fitting procedure accurately reproduces an underlying signal only when a large number of map realizations is averaged over; for a single map at WMAP or PLANCK resolution we are able to recover the result only for a highly prominent disk, with |δT| = 2√σ_G and ϑ_d = 40°. This is unfortunate, as it means that MFs are intrinsically too noisy to distinguish cold and hot spots of small angular size in the CMB.
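For a pixelized map, the three two-dimensional Minkowski functionals of an excursion set (area, boundary length, Euler characteristic) can be computed by elementary counting, treating each active pixel as a closed unit square. A minimal sketch of this idea (not the thesis' algorithm, which additionally handles spherical maps and error analysis):

```python
import numpy as np

def minkowski_functionals(mask):
    """Minkowski functionals of a binary excursion set on a pixel grid.

    Treats each active pixel as a closed unit square (4-connectivity).
    Returns (area, perimeter, euler) where
      area      = number of active pixels,
      perimeter = boundary length of the union of squares,
      euler     = V - E + F, the Euler characteristic of that union.
    """
    m = np.asarray(mask, dtype=bool)
    F = int(m.sum())                               # faces = active pixels
    # horizontally/vertically adjacent pairs share one unit edge
    n_adj = int((m[:, 1:] & m[:, :-1]).sum() + (m[1:, :] & m[:-1, :]).sum())
    E = 4 * F - n_adj                              # distinct unit edges
    perimeter = 4 * F - 2 * n_adj                  # unshared edges = boundary
    # a lattice vertex belongs to the set if any of its 4 neighbouring
    # pixels is active
    p = np.pad(m, 1)
    V = int((p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]).sum())
    return F, perimeter, V - E + F

# one isolated square: area 1, perimeter 4, Euler characteristic 1
print(minkowski_functionals([[1]]))
```

A hot disk in a temperature map raises the Euler characteristic by one; a ring (a collision-like annulus) contributes zero, which is the kind of morphological distinction the MFs quantify.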
This thesis deals with chaos synchronization in networks with time-delayed couplings. A network of chaotic units can synchronize isochronally and completely even if the exchanged signals are subject to one or several delay times. For a network of identical units, the master stability function method of Pecora and Carroll has become the established tool for stability analysis. For a network of coupled iterated Bernoulli maps, the master stability function corresponds to polynomials whose degree equals the largest delay time. The stability problem thus reduces to examining whether the roots of these polynomials lie inside the unit circle. Such an analysis can be carried out numerically, for instance with the Schur-Cohn theorem, but analytical results can also be obtained. In this thesis, Bernoulli networks with one or several time-delayed couplings and/or self-feedbacks are investigated. Statements are derived about parts of the stability region that are independent of the delay times. Furthermore, statements are made about systems with very large delay times. In particular, it is shown that no stable chaos synchronization is possible in a Bernoulli network if the delay time is much larger than the time scale of the local dynamics, i.e. the Lyapunov time. Moreover, in certain systems with several delay times, stable chaos synchronization is ruled out by symmetry arguments when the delay times stand in particular ratios to each other. For example, in a doubly bidirectionally coupled pair without self-feedback and with two different delay times, stable chaos synchronization is impossible if the delay times are in a ratio of coprime odd integers.
Chaos synchronization can also be excluded in a bipartite network with two large delay times that differ only by a small amount. Finally, a self-consistent argument is presented that interprets the occurrence of chaos synchronization in terms of the mixing of the signals of the individual units and relies, among other things, on the coprimality of the cycles of a network. The thesis closes by examining whether some of the results obtained for Bernoulli networks carry over to other chaotic networks. Noteworthy is the very good agreement between the results for a Bernoulli network and those for an analogous network of coupled semiconductor laser equations, as well as the agreement with experimental results for a system of semiconductor lasers.
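The stability criterion described above, that all roots of the master-stability polynomial lie inside the unit circle, is easy to check numerically. A minimal sketch; the example polynomial z^(τ+1) − a(1−ε)z^τ − aεγ is an illustrative assumption for a single delayed coupling with Bernoulli slope a, coupling strength ε and network eigenvalue γ, not the thesis' exact model:

```python
import numpy as np

def is_stable(coeffs):
    """Return True if all roots of the polynomial (coefficients given
    highest degree first) lie strictly inside the unit circle, i.e. the
    synchronized state is linearly stable."""
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) < 1.0))

# Illustrative master-stability polynomial for one delayed coupling:
#   z**(tau+1) - a*(1-eps)*z**tau - a*eps*gamma
a, eps, gamma, tau = 1.5, 0.8, 1.0, 4
coeffs = [1.0, -a * (1 - eps)] + [0.0] * (tau - 1) + [-a * eps * gamma]
print(is_stable(coeffs))
```

The same check applied for every transverse network eigenvalue γ reproduces the role the Schur-Cohn theorem plays in the analysis.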
The idea that our observable Universe may have originated from a quantum tunneling event out of an eternally inflating false vacuum state is a cornerstone of the multiverse paradigm. Modern theories that are considered as approaches towards an ultraviolet-complete fundamental theory of particles and gravity, such as the various types of string theory, even suggest that a vast landscape of different vacuum configurations exists, and that gravitational tunneling is an important mechanism with which the Universe can explore this landscape. The tunneling scenario also presents a unique framework to address the initial conditions of our observable Universe. In particular, it makes it possible to introduce deviations from the cosmological concordance model in a controlled and well-motivated way. These deviations are a central topic of this work. An important feature of most of the theories mentioned above is the presumed existence of additional space dimensions in excess of the three which we observe in our everyday experience. It was realized that these extra dimensions could escape detection if they are compactified to microscopic length scales far beyond the reach of current experiments. There also seem to be natural mechanisms available for dynamical compactification in those theories. These typically lead to a vast landscape of different vacuum configurations, which may also differ in the number of macroscopic dimensions, only the total number of dimensions being determined by the theory. Transitions between these vacuum configurations may hence open up new directions which were previously compact, spontaneously compactify some previously macroscopic directions, or otherwise re-arrange the configuration of compact and macroscopic dimensions in a more general way. From within the bubble Universe, such a process may be perceived as an anisotropic background spacetime: intuitively, the dimensions which open up may give rise to preferred directions.
If our 3+1 dimensional observable Universe was born in a process as described above, one may expect to find traces of a preferred direction in cosmological observations. For instance, two directions could be curved like on a sphere, while the third space direction is flat. Using a scenario of gravitational tunneling to fix the initial conditions, I show how the primordial signatures in such an anisotropic Universe can be obtained in principle and work out a particular example in more detail. A small deviation from isotropy also has phenomenological consequences for the later evolution of the Universe. I discuss the most important effects and show that backreaction can be dynamically important. In particular, under certain conditions, a buildup of anisotropic stress in different components of the cosmic fluid can lead to a dynamical isotropization of the total stress-energy tensor. The mechanism is again demonstrated with the help of a physical example.
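The dynamical isotropization mentioned above rests on a standard scaling argument: in a Bianchi I universe, the shear contribution to the expansion rate dilutes as a⁻⁶ while dust dilutes as a⁻³, so any initial anisotropy becomes subdominant as the universe expands. A toy illustration of these textbook scalings (the normalizations are arbitrary; this is not the thesis' backreaction mechanism):

```python
import numpy as np

# Shear "energy density" dilutes as a**-6, dust as a**-3 (standard
# Bianchi I scalings); the prefactors 0.1 and 1.0 are illustrative.
a = np.logspace(0, 3, 7)            # scale factor from 1 to ~1000
rho_matter = 1.0 * a**-3
sigma2 = 0.1 * a**-6
anisotropy = sigma2 / rho_matter    # falls as a**-3: isotropization
for ai, r in zip(a, anisotropy):
    print(f"a = {ai:8.1f}   sigma^2/rho = {r:.3e}")
```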
This thesis deals with radiation processes in blazars. Blazars are a subclass of active galactic nuclei in which the jet axis points towards the observer. Characteristic of blazars is a multifrequency photon spectrum extending from the radio band up to gamma rays with TeV energies. The gamma-ray band in particular is currently moving into the focus of attention with experiments such as FERMI and MAGIC. The aim of this work is to model the radiation processes involved and to describe the multifrequency spectra of blazars with a hadronic-leptonic model. The starting point is a self-consistent synchrotron self-Compton (SSC) model, which is used to describe the spectrum of the source 1ES 1218+30.4. The choice of parameters is supported by an estimate of the mass of the central black hole. The SSC model is then examined with respect to its behavior under changes of the model parameters. Dependencies of the photon spectrum on the change factors of the parameters are derived. Relating these dependencies to one another leads to the conclusion that, for a fixed spectral index of the electron distribution, the choice of a parameter set to model a given photon spectrum is unique. To introduce a time-dependent hadronic model, the SSC model is extended by the presence of nonthermal protons. Proton synchrotron radiation can then contribute in the gamma-ray band. In addition, pions are produced by proton-photon interactions. Their decay, together with pair production from photon-photon absorption, yields secondary electrons and positrons, which in turn contribute to the high-energy spectrum.
Besides the pions, proton-photon interactions also produce neutrinos and neutrons, which allow a direct view into the emission region. The hadronic model presented here is applied to the source 3C 279. For this source, given its detection in the VHE band, the SSC approach is insufficient to describe the photon spectrum. With the model presented here, the spectrum is described very well in the regions that are critical for the SSC approach. In particular, different flux states can be modeled and transformed into one another solely by changing the maximum energies of protons and electrons. This simple way of modeling the variability of the source supports the choice of the hadronic approach. The model thus provides a very good tool for studying the emission processes in blazars. Moreover, while the estimated neutrino flux makes a detection of 3C 279 as a point source with IceCube unlikely, the model in general offers the possibility of providing answers in the context of the multimessenger approach. In the same context, the contribution of escaping neutrons to cosmic rays is also investigated.
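Why proton synchrotron emission can reach the gamma-ray band only under extreme conditions follows from the standard characteristic-frequency formula ν_c = 3γ²eB/(4πm): at equal Lorentz factor and field, protons radiate at a frequency lower by m_e/m_p ≈ 1/1836. A small sketch with illustrative values (not the thesis' fitted parameters):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg
M_P = 1.67262192369e-27      # proton mass, kg

def nu_crit(gamma, B, m):
    """Characteristic synchrotron frequency nu_c = 3*gamma^2*e*B/(4*pi*m)
    (SI units, pitch angle 90 degrees; standard textbook formula)."""
    return 3 * gamma**2 * E_CHARGE * B / (4 * math.pi * m)

# Illustrative values: same Lorentz factor and field for both species.
B, gamma = 1.0, 1e6
print(f"electron: {nu_crit(gamma, B, M_E):.2e} Hz")
print(f"proton:   {nu_crit(gamma, B, M_P):.2e} Hz")
```

Pushing proton synchrotron into the gamma-ray band therefore requires much larger proton energies or magnetic fields, which is why the hadronic model ties the flux states to the maximum particle energies.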
Indirect Search for Dark Matter in the Universe - the Multiwavelength and Multiobject Approach
(2011)
Cold dark matter constitutes a basic tenet of modern cosmology, essential for our understanding of structure formation in the Universe. Since its first discovery by means of spectroscopic observations of the dynamics of the Coma cluster some 80 years ago, mounting evidence of its gravitational pull and its impact on the geometry of space-time has built up across a wide range of scales, from galaxies to the entire Hubble flow. The apparent lack of electromagnetic coupling and independent measurements of the energy density of baryonic matter from the primordial abundances of light elements show the non-baryonic nature of dark matter, and its clustering properties prove that it is cold, i.e. that its temperature was lower than its mass at the time of radiation-matter equality. Generic particle candidates for cold dark matter are weakly interacting massive particles at the electroweak symmetry-breaking scale, such as the neutralinos in R-parity conserving supersymmetry. Such particles would naturally freeze out with a cosmologically relevant relic density at early times in the expanding Universe. Subsequent clustering of matter would revive annihilation interactions between the dark matter particles to some extent and thus lead to potentially observable high-energy emission from the decaying unstable secondaries produced in annihilation events. The spectra of the secondaries would permit a determination of the mass and annihilation cross section, which are crucial for the microphysical identification of the dark matter. This is the central motivation for indirect dark matter searches. However, at present neither the indirect searches, nor the complementary direct searches based on the detection of elastic scattering events, nor the production of candidate particles in collider experiments have provided unequivocal evidence for dark matter.
This does not come as a surprise, since the dark matter particles interact only through weak interactions and therefore the corresponding secondary emission must be extremely faint. It turns out that even for the strongest mass concentrations in the Universe, the dark matter annihilation signal is not expected to exceed the level of competing astrophysical sources. Thus, the discrimination of the putative dark matter annihilation signal from the signals of the astrophysical inventory has become crucial for indirect search strategies. In this thesis, a novel search strategy is developed and exemplified in which target selection across a wide range of masses, astrophysical background estimation, and multiwavelength signatures play the key role. It turns out that the uncertainties regarding the halo profile and the boost due to surviving substructure are larger for halos at the lower end of the observed mass scales, i.e. in the regime of dwarf galaxies and below, while astrophysical backgrounds tend to become more severe for massive dark matter halos such as clusters of galaxies. By contrast, the uncertainties due to unknown details of particle physics are invariant under changes of the halo mass. Therefore, the different scaling behaviors can be employed to significantly cut down on the uncertainties in observations of different targets covering a major part of the involved mass scales. This strategic approach was implemented in the scientific program carried out with the MAGIC telescope system. Observations of dwarf galaxies and of the Virgo and Perseus clusters of galaxies have been carried out and, at the time of writing, result in some of the most stringent constraints on weakly interacting massive particles from indirect searches.
Here, the low-threshold design of the MAGIC telescope system plays a crucial role, since the bulk of the high-energy photons, produced with a high multiplicity during the fragmentation of unstable dark matter annihilation products, are emitted at energies well below the dark matter mass scale. The upper limits severely constrain less generic, but more prolific scenarios characterized by extraordinarily high annihilation efficiencies.
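The figure of merit behind these constraints is the standard annihilation-flux expression Φ = ⟨σv⟩/(8πm²) · N_γ · J, which factorizes the particle physics (cross section, mass, photon yield) from the astrophysical J-factor that differs between dwarfs and clusters. A minimal sketch with order-of-magnitude example inputs (thermal-relic cross section, 1 TeV WIMP, a dwarf-like J-factor; these are assumptions, not the thesis' results):

```python
import math

def annihilation_flux(sigma_v, m_chi_gev, n_gamma, J):
    """Gamma-ray flux from self-annihilating (Majorana) dark matter,
    Phi = sigma_v / (8*pi*m_chi^2) * N_gamma * J.
    Units must be chosen consistently by the caller, e.g.
    sigma_v in cm^3 s^-1, m_chi in GeV, J in GeV^2 cm^-5.
    """
    return sigma_v / (8 * math.pi * m_chi_gev**2) * n_gamma * J

# Illustrative numbers only:
phi = annihilation_flux(sigma_v=3e-26,      # cm^3 s^-1 (thermal relic)
                        m_chi_gev=1000.0,   # 1 TeV WIMP
                        n_gamma=10.0,       # photons per annihilation
                        J=1e18)             # dwarf-like J-factor
print(f"{phi:.2e} photons cm^-2 s^-1")
```

The 1/m² dependence also illustrates why a low energy threshold matters: most photons from fragmentation emerge well below the dark matter mass.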
During the last decades the standard model of particle physics has evolved into one of the most precise theories in physics, describing the properties and interactions of fundamental particles in various experiments with high accuracy. However, it suffers from several shortcomings, from an experimental as well as from a theoretical point of view: there is no confirmed mechanism for the generation of the masses of the fundamental particles, in particular not for the light but massive neutrinos. In addition, the standard model does not provide an explanation for the observation of dark matter in the universe. Moreover, the gauge couplings of the three forces in the standard model do not unify, implying that a fundamental theory combining all forces cannot be formulated. Within this thesis we address supersymmetric models as answers to these questions, but instead of focusing on the simplest supersymmetrization of the standard model, we consider basic extensions, namely the next-to-minimal supersymmetric standard model (NMSSM), which contains an additional singlet field, and R-parity violating models. R-parity is a discrete symmetry introduced to guarantee the stability of the proton. Using lepton number violating terms in the context of bilinear R-parity violation and the munuSSM, we are able to explain neutrino physics in an intrinsically supersymmetric way, since those terms induce a mixing between the neutralinos and the neutrinos. Since 2009 the Large Hadron Collider (LHC) at CERN has been exploring the new tera-electronvolt energy regime, allowing the production of potentially existing heavy particles in proton collisions. Thus the near future might provide answers to the open questions of mass generation in the standard model and show hints of physics beyond the standard model.
This thesis therefore works out the phenomenology of the supersymmetric models under consideration and points out differences to the well-known features of the simplest supersymmetric realization of the standard model. In the case of the R-parity violating models, the decays of the light neutralinos can result in displaced vertices. In combination with a light singlet state, these displaced vertices can offer a rich phenomenology, such as non-standard Higgs decays into a pair of singlinos decaying with displaced vertices. Within this thesis we present calculations at the next order of perturbation theory, since one-loop corrections can provide large contributions to the tree-level masses and decay widths. We use an on-shell renormalization scheme to calculate the masses of neutralinos and charginos, including the neutrinos and leptons in the case of the R-parity violating models, at one-loop level. The discussion shows the similarities and differences to existing calculations in another renormalization scheme, the DRbar scheme. Moreover, we consider two-body decays of the form $\chi_j^0 \to \chi_l^\pm W^\mp$, involving a heavy gauge boson in the final state, at one-loop level. Corrections are found to be large in the case of small or vanishing tree-level decay widths, and also for the R-parity violating decay of the lightest neutralino, $\chi_1^0 \to l^\pm W^\mp$. An interesting feature of the models based on bilinear R-parity violation is the correlation between the branching ratios of the lightest neutralino decays and the neutrino mixing angles. We discuss these relations at tree level and, for the two-body decays $\chi_1^0 \to l^\pm W^\mp$, also at one-loop level, since only the full one-loop corrections reproduce the behavior expected at tree level. The appendix describes the two programs MaCoR and CNNDecays, which were developed for the analysis carried out in this thesis.
MaCoR allows for the calculation of mass matrices and couplings in the models under consideration and CNNDecays is used for the one-loop calculations of neutralino and chargino mass matrices and the two-body decay widths.
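The kind of calculation MaCoR automates starts from diagonalizing tree-level mass matrices. As a self-contained illustration, the textbook 4x4 MSSM neutralino mass matrix in the (bino, wino, higgsino) basis can be diagonalized numerically; the R-parity violating models of the thesis extend this matrix by the neutrino directions, which is omitted here, and the input values below are arbitrary examples:

```python
import numpy as np

MZ = 91.1876                 # Z boson mass, GeV
SW = np.sqrt(0.2312)         # sin(theta_W)
CW = np.sqrt(1.0 - SW**2)

def neutralino_masses(M1, M2, mu, tan_beta):
    """Tree-level MSSM neutralino masses from the standard symmetric
    4x4 mass matrix; physical masses are the absolute values of the
    eigenvalues (textbook form, real parameters assumed)."""
    sb = tan_beta / np.hypot(1.0, tan_beta)   # sin(beta)
    cb = 1.0 / np.hypot(1.0, tan_beta)        # cos(beta)
    M = np.array([
        [M1,             0.0,            -MZ * SW * cb,  MZ * SW * sb],
        [0.0,            M2,              MZ * CW * cb, -MZ * CW * sb],
        [-MZ * SW * cb,  MZ * CW * cb,    0.0,          -mu],
        [ MZ * SW * sb, -MZ * CW * sb,   -mu,            0.0],
    ])
    return np.sort(np.abs(np.linalg.eigvalsh(M)))

# Example point: bino-like lightest neutralino near M1 = 100 GeV
print(neutralino_masses(M1=100.0, M2=200.0, mu=500.0, tan_beta=10.0))
```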
By the end of the year 2011, both the CMS and ATLAS experiments at the Large Hadron Collider had recorded around 5 inverse femtobarns of data at an energy of 7 TeV. The data analysed so far show only vague hints of new physics at the TeV scale. However, one knows that around this scale new physics should show up so that theoretical issues of the standard model of particle physics can be cured. During the last decades, extensions to the standard model that are supposed to solve its problems have been constructed, and the corresponding phenomenology has been worked out. As soon as new physics is discovered, one has to deal with the problem of determining the nature of the underlying model. A first hint is of course given by the mass spectrum and quantum numbers, such as the electric and colour charges, of the new particles. However, there are two popular model classes, supersymmetric models and extra-dimensional models, which can exhibit almost identical properties in the accessible energy range. Both introduce partners to the standard model particles with the same charges, and thus one needs an extended discrimination method. A relevant difference arises from the origin of these partners: the partners constructed in extra-dimensional models have the same spin as their standard model counterparts, while in Supersymmetry they differ by spin 1/2. These different spins have an impact on the phenomenology of the two models. For example, one can exploit the fact that the total cross sections are affected, but this requires very good knowledge of the couplings and masses involved. Another approach uses angular distributions depending on the particle spins. A prevailing method based on this idea uses the invariant mass distribution of the visible particles in decay chains. One can relate these distributions to the spin of the particle mediating the decay, since it is reflected in the highest power of the invariant mass $\sff$ of the adjacent particles.
In this thesis we first study the influence of operators of dimension higher than four on spin determination in such decay chains. We write down the relevant dimension 5 and 6 operators and calculate their contributions to the invariant mass distribution. We discuss how they affect the determination of spin and couplings. We then address two scenarios which do not involve decay chains in the usual sense. In three-body decays, the method pointed out above cannot be applied, since it can only be used if the mediating particle is produced on-shell. For off-shell decays, which are important e.g. in split-Supersymmetry or split-Universal Extra Dimensions, the narrow width approximation cannot be made, which previously led to the simple relation between spin and the highest power of $\sff$. We work out a strategy for these three-body decays that can distinguish between the different spin scenarios. The method relies on the fact that the differential decay width $d\Gamma /d\sff$ can be rewritten in this limit as a global phase space function times a polynomial in $\sff$. The coefficients of this polynomial are functions of masses and couplings, and we show that they have distinct signs or ratios depending on the spins involved in the decay. We test the strategy in a series of Monte Carlo studies and discuss the influence of the intermediate particle's mass. In the last part we consider a topology with very short decay chains. Again one cannot use the relation between spin and invariant mass. We investigate a variable that was proposed for the discrimination of Supersymmetry and Universal Extra Dimensions in the high energy limit, which reduces the problem to the underlying production process. We show how this variable can also be used in new physics scenarios where the high energy limit is not a viable approximation.
We include all possible spin scenarios with renormalizable interactions and study in detail the influence of the involved masses and couplings on the discrimination power of this variable. We find, for example, that the scenario containing the supersymmetric case is well distinguishable from most other spin scenarios.
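The core idea that the spin of the mediator shows up as the highest power of the invariant mass can be caricatured with inverse-transform sampling: a distribution flat in the normalized invariant mass squared mimics pure phase space (scalar-like), while extra powers mimic the factors that spin correlations introduce. This is a toy model under those stated assumptions, not a matrix-element calculation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_msq(n, power):
    """Draw n values of x = m^2 / m_max^2 from dGamma/dx ~ x**power via
    inverse-transform sampling: the CDF x**(power+1) inverts to
    u**(1/(power+1)).  power = 0 mimics pure phase space; power = 1
    mimics an extra factor of x from spin correlations (toy only)."""
    u = rng.random(n)
    return u ** (1.0 / (power + 1))

# Even a crude observable such as the sample mean separates the two
# toy hypotheses (flat in x has mean 1/2, linear in x has mean 2/3):
scalar_like = sample_msq(100_000, power=0)
fermion_like = sample_msq(100_000, power=1)
print(scalar_like.mean(), fermion_like.mean())
```

In the thesis' three-body analysis, the analogous discriminators are the signs and ratios of the fitted polynomial coefficients rather than a single moment.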
Over the past decades, noncommutative geometry has grown into an established field in pure mathematics and theoretical physics. The discovery that noncommutative geometry emerges as a limit of quantum gravity and string theory has provided strong motivation to search for physics beyond the standard model of particle physics, and also beyond Einstein's theory of general relativity, within the realm of noncommutative geometries. A very fruitful approach in the latter direction is due to Julius Wess and his group, which combines deformation quantization (star-products) with quantum group methods. The resulting gravity theory not only includes noncommutative effects of spacetime, but is also invariant under a deformed Hopf algebra of diffeomorphisms, generalizing the principle of general covariance to the noncommutative setting. The purpose of the first part of this thesis is to understand symmetry reduction in noncommutative gravity, which then allows us to find exact solutions of the noncommutative Einstein equations. These are important investigations in order to capture the physical content of such theories and to make contact with applications in, e.g., noncommutative cosmology and black hole physics. We propose an extension of the usual symmetry reduction procedure, which is frequently applied to the construction of exact solutions of Einstein's field equations, to noncommutative gravity, and show that this leads to preferred choices of noncommutative deformations of a given symmetric system. For the case of abelian Drinfel'd twists, we classify all consistent deformations of spatially flat Friedmann-Robertson-Walker cosmologies and of the Schwarzschild black hole. The deformed symmetry structure allows us to obtain exact solutions of the noncommutative Einstein equations in many of our models, for which the noncommutative metric field coincides with the classical one. In the second part we focus on quantum field theory on noncommutative curved spacetimes.
We develop a new formalism by combining methods from the algebraic approach to quantum field theory with noncommutative differential geometry. The result is an algebra of observables for scalar quantum field theories on a large class of noncommutative curved spacetimes. A precise relation to the algebra of observables of the corresponding undeformed quantum field theory is established. We focus on explicit examples of deformed wave operators and find that there can be noncommutative corrections even at the level of free field theories, which is not the case in the simplest example of the Moyal-Weyl deformed Minkowski spacetime. The convergent deformation of simple toy models is investigated, and it is shown that these quantum field theories have many new features compared to formal deformation quantization. In addition to the expected nonlocality, we find that the relation between the deformed and the undeformed quantum field theory is affected in a nontrivial way, leading to an improved behavior of the noncommutative quantum field theory at short distances, i.e. in the ultraviolet. In the third part we develop elements of a more powerful, albeit more abstract, mathematical approach to noncommutative gravity. The goal is to better understand global aspects of homomorphisms between, and connections on, noncommutative vector bundles, which are fundamental objects in the mathematical description of noncommutative gravity. We prove that all homomorphisms and connections of the deformed theory can be obtained by applying a quantization isomorphism to undeformed homomorphisms and connections. The extension of homomorphisms and connections to tensor products of modules is clarified, and as a consequence we are able to add tensor fields of arbitrary type to the noncommutative gravity theory of Wess et al. As a nontrivial application of the new mathematical formalism we extend our studies of exact noncommutative gravity solutions to more general deformations.
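The prototype of the star-products mentioned above is the Moyal-Weyl product, whose first-order term already encodes the noncommutativity of the coordinates. A minimal symbolic sketch, truncating the standard formal series at first order in the deformation parameter θ:

```python
import sympy as sp

x, y, theta = sp.symbols("x y theta")

def star(f, g):
    """Moyal-Weyl star product on the (x, y) plane, truncated at first
    order in the deformation parameter theta:
        f * g = f*g + (i*theta/2)*(dx f * dy g - dy f * dx g) + O(theta^2).
    This is the standard formal star product on Moyal-Weyl space."""
    correction = sp.I * theta / 2 * (
        sp.diff(f, x) * sp.diff(g, y) - sp.diff(f, y) * sp.diff(g, x)
    )
    return sp.expand(f * g + correction)

# Noncommutativity of the coordinates: [x, y]_* = i*theta
commutator = sp.simplify(star(x, y) - star(y, x))
print(commutator)   # I*theta
```

On curved noncommutative spacetimes the twist is built from vector fields rather than plain partial derivatives, but the first-order structure is analogous.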
High-energy solar particles are scattered by turbulent magnetic fields during their transport through the heliosphere. From today's perspective, two main obstacles stand in the way of understanding this scattering process:
- The scattering of high-energy particles by turbulent magnetic fields is a nonlinear process that can hardly be described by analytical theories.
- The scattering process depends strongly on the actual magnetic fields and hence on the magnetic field turbulence. Our present understanding of heliospheric turbulence is unfortunately quite limited owing to sparse experimental data, which considerably hampers a qualified implementation in analytical and numerical approaches and in the past made artificial assumptions necessary for model building.
In this thesis, particle transport is studied by simulating test particles in a turbulent magnetohydrodynamic plasma. The test particles correctly reproduce the nonlinear scattering processes as well, which overcomes the first obstacle; this approach has already been applied successfully in earlier numerical studies. For the first time, the turbulence for the particle transport problem is modeled here on the basis of the magnetohydrodynamic equations. This is the mathematically correct description of magnetic field turbulence below the ion gyrofrequency, with only minor numerical restrictions. In addition, a turbulence driver that can be adapted to the physical scenario allows an even more realistic simulation of the turbulence. With this universally applicable numerical approach, any artificial assumptions concerning the second obstacle can be avoided.
The three methods combined for the first time in this thesis (test particles, magnetohydrodynamic turbulence, turbulence driver) thus enable an investigation and analysis of transport and turbulence phenomena of outstanding quality, which, in particular for particle transport, allows a direct connection to experimental results. Important results of this thesis are: - the demonstration of three-wave interactions for weak and for the onset of strong turbulence; - an analysis of the anisotropy of the turbulence with respect to the background magnetic field as a function of the driving model (the anisotropy in particular has so far been barely accessible to experiment); - a general investigation of the effect of gyroresonances on the diffusion coefficients of high-energy solar particles; - the simulation of particle transport in the heliosphere on the basis of experimental measurement data. Altogether, the detailed analysis of the simulation results provides access to an understanding of transport that cannot be obtained from experimental investigations. Only the magnetic field strength and the particle energy under study were prescribed in the simulation. The analysis of the simulation results yields the same mean free path as was obtained by other methods directly from the measurements. The predominant alignment of the high-energy particles parallel and antiparallel to the background magnetic field in the simulation also agrees with experimental studies; it is shown to result solely from resonant scattering of the particles off the magnetic fields. Furthermore, the type of diffusion, the energy loss of the particles during transport, and the validity of quasilinear theory are investigated.
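The test-particle method summarized above integrates charged-particle trajectories through the simulated magnetic fields. A standard integrator for this task is the Boris scheme, which in a purely magnetic field reduces to an exact rotation of the velocity and therefore conserves kinetic energy to machine precision. The sketch below (plain NumPy, with a uniform background field standing in for the turbulent MHD fields of the thesis; units and parameter values are illustrative, not taken from the original simulations) shows one such pusher:

```python
# Minimal test-particle pusher (Boris scheme) for a charged particle in
# a static magnetic field -- a sketch with a uniform field replacing the
# turbulent MHD fields used in the thesis; values are illustrative.
import numpy as np

def boris_push(v, B, q_over_m, dt):
    """Rotate the velocity v around B for one time step dt.
    With no electric field the Boris scheme is a pure rotation,
    so the speed |v| (hence kinetic energy) is conserved exactly."""
    t = 0.5 * q_over_m * dt * B            # half-step rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))     # normalized rotation vector
    v_prime = v + np.cross(v, t)
    return v + np.cross(v_prime, s)

# Illustrative values (not from the thesis):
B = np.array([0.0, 0.0, 1.0])              # background field along z
v = np.array([1.0, 0.0, 0.5])              # initial velocity
x = np.zeros(3)                            # initial position
dt, q_over_m = 0.05, 1.0

speed0 = np.linalg.norm(v)
for _ in range(1000):                      # many gyro-periods
    v = boris_push(v, B, q_over_m, dt)
    x = x + v * dt

# Energy is conserved, and the particle streams freely along B
# (the parallel velocity component v_z is untouched by the rotation):
print(np.isclose(np.linalg.norm(v), speed0))  # True
```

In the thesis-scale simulations the uniform field would be replaced by the driven MHD turbulence fields interpolated to the particle position, which is what produces the resonant scattering and the diffusion coefficients discussed above.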
We consider the prospects for a neutrino factory measuring mixing angles, the CP violating phase and mass-squared differences by detecting wrong-charge muons arising from the chain $\mu^+\to\nu_e\to\nu_\mu\to\mu^-$ and the right-charge muons coming from the chain $\mu^+\to\bar{\nu}_\mu\to\bar{\nu}_\mu\to\mu^+$ (similarly for $\mu^-$ chains), where $\nu_e\to\nu_\mu$ and $\bar{\nu}_\mu\to\bar{\nu}_\mu$ are neutrino oscillation channels through a long baseline. First, we study physics with near detectors and consider the treatment of systematic errors including cross section errors, flux errors, and background uncertainties. We illustrate for which measurements near detectors are required, discuss how many are needed, and what the role of flux monitoring is. We demonstrate that near detectors are mandatory for the leading atmospheric parameter measurements if the neutrino factory has only one baseline, whereas systematic errors partially cancel if the neutrino factory complex includes the magic baseline. Second, we perform the baseline and energy optimization of the neutrino factory including the latest simulation results from the magnetized iron neutrino detector (MIND). We also consider the impact of $\tau$ decays, generated by the appearance channels $\nu_\mu \rightarrow \nu_\tau$ and $\nu_e \rightarrow \nu_\tau$, on the discovery reach for the mass ordering, leptonic CP violation, and nonzero $\theta_{13}$, which we find to be negligible for the considered detector. Third, we compare a high energy neutrino factory to a low energy neutrino factory and find that they are two versions of the same experiment optimized for different regions of the parameter space. In addition, we briefly comment on whether it is useful to build the bi-magic baseline at the low energy neutrino factory. Finally, the effects of one additional massive sterile neutrino are discussed in the context of a combined short and long baseline setup.
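For orientation on the oscillation channels named above: in the two-flavor vacuum approximation the appearance probability takes the standard form $P = \sin^2(2\theta)\,\sin^2(1.27\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}])$. The sketch below evaluates this textbook formula only; the thesis analyses use the full three-flavor framework with matter effects, which this deliberately omits, and the parameter values are illustrative rather than fitted:

```python
# Two-flavor vacuum oscillation probability -- a back-of-the-envelope
# sketch. The thesis works with full three-flavor oscillations including
# matter effects, which are omitted here; parameter values are
# illustrative, not taken from the thesis.
import math

def p_appearance(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """P(nu_a -> nu_b) = sin^2(2 theta) * sin^2(1.27 dm^2 L / E),
    with dm^2 in eV^2, baseline L in km, and energy E in GeV."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# At L = 0 nothing has oscillated yet:
print(p_appearance(0.1, 2.5e-3, 0.0, 25.0))      # 0.0

# At the first oscillation maximum, 1.27 dm^2 L / E = pi/2, the
# probability reaches the full sin^2(2 theta):
L_max = (math.pi / 2) * 25.0 / (1.27 * 2.5e-3)   # first maximum in km
print(round(p_appearance(0.1, 2.5e-3, L_max, 25.0), 6))  # 0.1
```

The trade-off between baseline length and beam energy visible in the argument $1.27\,\Delta m^2 L/E$ is what drives the baseline and energy optimization discussed in the abstract.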
It is found that near detectors can provide the required sensitivity in the LSND-motivated $\Delta m_{41}^2$ range, while the long baselines also yield some sensitivity in the region of the atmospheric mass splitting introduced by the sterile neutrino.