Institut für Theoretische Physik und Astrophysik
The emission of solar type II radio bursts is a phenomenon of heliospheric plasma physics that has been observed for decades. These radio bursts, which occur in connection with the propagation of coronal shock fronts, exhibit a characteristic two-banded emission spectrum. As the shock expands, they drift to lower frequencies. Analytical theories of this emission predict nonlinear plasma-wave interaction as the cause, but because the emission region lies close to the Sun, in-situ satellite data are extremely scarce, so that a conclusive verification of the predicted processes has not been possible so far. In this dissertation, the plasma environment in the foreshock region of a coronal shock front was modeled with a kinetic plasma simulation code based on the particle-in-cell principle. The propagation and coupling behavior of electrostatic and electromagnetic wave modes was investigated. The complete spatial information about the wave composition in the simulation makes it possible to study the kinematics of nonlinear wave couplings in great detail. A picture of the generation of solar radio bursts emerged that is consistent with the analytical theory of three-wave interaction: electromagnetic decay of electrostatic modes produces fundamental radio emission, while the merging of counter-propagating electrostatic modes excites harmonic radio emission. The coupling strengths and angular dependence of these processes were investigated. With the resulting numerical laboratory system, the dependence of the wave couplings and the emerging radio emission on the strength of the electron beam and on the solar distance was studied.
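As background to the frequency drift described above: type II bursts are emitted near the local electron plasma frequency and its harmonic, which both fall as the shock moves into thinner plasma. The following Python sketch illustrates this with the standard relation $f_p \approx 8.98\,\mathrm{kHz}\,\sqrt{n_e/\mathrm{cm}^{-3}}$ and the Newkirk coronal density model; both the model choice and the sampled distances are illustrative assumptions, not taken from the thesis.

```python
import math

def newkirk_density(r):
    """Electron density [cm^-3] from the Newkirk coronal model
    (an illustrative choice); r is heliocentric distance in solar radii."""
    return 4.2e4 * 10.0 ** (4.32 / r)

def plasma_frequency_mhz(n_e):
    """Electron plasma frequency: f_p ~ 8.98 kHz * sqrt(n_e [cm^-3])."""
    return 8.98e-3 * math.sqrt(n_e)

for r in (1.1, 1.5, 2.0, 3.0):
    f_p = plasma_frequency_mhz(newkirk_density(r))
    # type II bursts appear near f_p (fundamental band) and 2*f_p
    # (harmonic band); both drift downward as the shock expands
    print(f"r = {r:.1f} R_sun: fundamental ~ {f_p:6.1f} MHz, harmonic ~ {2 * f_p:6.1f} MHz")
```

The monotonic decrease of both bands with distance is the drift to lower frequencies that the abstract describes.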
In memoriam Karl Kraus
(2012)
By the end of the year 2011, both the CMS and ATLAS experiments at the Large Hadron Collider had recorded around 5 inverse femtobarns of data at an energy of 7 TeV. The data analysed so far give only vague hints towards new physics at the TeV scale. However, new physics is expected to show up around this scale so that theoretical issues of the standard model of particle physics can be cured. During the last decades, extensions of the standard model that are supposed to solve its problems have been constructed, and the corresponding phenomenology has been worked out. As soon as new physics is discovered, one has to deal with the problem of determining the nature of the underlying model. A first hint is of course given by the mass spectrum and quantum numbers such as the electric and colour charges of the new particles. However, there are two popular model classes, supersymmetric models and extra-dimensional models, which can exhibit almost identical properties in the accessible energy range. Both introduce partners to the standard model particles with the same charges, and thus an extended discrimination method is needed. A relevant difference arises from the origin of these partners: the partners constructed in extra-dimensional models have the same spin as their standard model counterparts, while in Supersymmetry they differ by spin 1/2. These different spins have an impact on the phenomenology of the two models. For example, one can exploit the fact that the total cross sections are affected, but this requires very good knowledge of the couplings and masses involved. Another approach uses angular distributions depending on the particle spins. A prevailing method based on this idea uses the invariant mass distribution of the visible particles in decay chains. One can relate these distributions to the spin of the particle mediating the decay, since it is reflected in the highest power of the invariant mass $\sff$ of the adjacent particles.
In this thesis we first study the influence of operators of dimension higher than 4 on spin determination in such decay chains. We write down the relevant dimension-5 and dimension-6 operators and calculate their contributions to the invariant mass distribution. We discuss how they affect the determination of spin and couplings. We then address two scenarios which do not involve decay chains in the usual sense. In three-body decays, the method outlined above cannot be applied, since it only works if the mediating particle is produced on-shell. For off-shell decays, which are important e.g. in split-Supersymmetry or split-Universal Extra Dimensions, one cannot make the narrow-width approximation that previously led to the simple relation between spin and the highest power of $\sff$. We work out a strategy for these three-body decays that can distinguish between the different spin scenarios. The method relies on the fact that the differential decay width $d\Gamma /d\sff$ can be rewritten in this limit as a global phase-space function times a polynomial in $\sff$. The coefficients of this polynomial are functions of masses and couplings, and we show that they have distinct signs or ratios depending on the spins involved in the decay. We test the strategy in a series of Monte Carlo studies and discuss the influence of the intermediate particle's mass. In the last part we consider a topology with very short decay chains. Again, one cannot use the relation between spin and invariant mass. We investigate a variable that was proposed for the discrimination of Supersymmetry and Universal Extra Dimensions in the high-energy limit, which reduces the problem to the underlying production process. We show how this variable can also be used in new-physics scenarios where the high-energy limit is not a viable approximation.
We include all possible spin scenarios with renormalizable interactions and study in detail the influence of the involved masses and couplings on the discrimination power of this variable. We find for example that the scenario containing the supersymmetric case is well distinguishable from most other spin scenarios.
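To illustrate the idea that the mediator's spin shows up in the highest power of the invariant mass: the following toy sketch (illustrative shapes and normalizations, not taken from the thesis) contrasts a flat phase-space distribution with the linear and quadratic shapes that higher-spin mediators can produce.

```python
import numpy as np

# invariant mass squared of the visible pair, rescaled to [0, 1]
s = np.linspace(0.0, 1.0, 501)

# toy distributions dN/ds: the highest power of s tracks the mediator spin
shapes = {
    "scalar (pure phase space)": np.ones_like(s),  # highest power s^0
    "spin-1/2 mediator":         2.0 * s,          # highest power s^1
    "spin-1 mediator":           3.0 * s ** 2,     # highest power s^2
}

for name, dist in shapes.items():
    area = dist.mean()                     # mean over [0,1] ~ unit integral
    ratio = dist[-1] / dist[len(s) // 2]   # value at s=1 relative to s=0.5
    print(f"{name:26s} area ~ {area:.3f}, endpoint/midpoint ratio = {ratio:.1f}")
```

The endpoint-to-midpoint ratios (1, 2, 4) are a crude stand-in for how the leading power of $\sff$ distinguishes the spin hypotheses in practice.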
This thesis deals with the radiation of active galactic nuclei. The first maximum of the characteristic double-peaked structure of the $\nu F_{\nu}$ spectrum of blazars is without doubt synchrotron radiation of high-energy electrons within the relativistic outflow of the underlying active galactic nucleus. The radiation processes and particle species contributing to the second (high-energy) maximum, however, are the subject of ongoing debate. In this work, a fully self-consistent and time-dependent hybrid emission model, which also accounts for particle acceleration, is developed and applied to different blazar types along the blazar sequence, from BL Lac objects with various peak frequencies up to flat-spectrum radio quasars. The spectral emission of the former can be well described in the purely leptonic limit, i.e. the second $\nu F_{\nu}$ peak arises from inverse-Compton-scattered synchrotron photons of the radiating electrons themselves. To describe the latter, one must allow for non-thermal protons within the jet in order to consistently explain the dominance of the second maximum in the spectrum. In this case, the second peak consists of proton synchrotron radiation and cascade radiation from photohadronic processes. With the developed model it is also possible to exploit the temporal information provided by blazar outbursts, on the one hand to further constrain the free model parameters and, much more importantly, on the other hand to distinguish leptonically dominated blazars from hadronic ones.
For this purpose, the typical time lags in the inter-band light curves are used as a hadronic fingerprint. With a sample of 16 spectra of ten blazars along the blazar sequence, observed in different flux states and with strong variability, it is possible to address the most important open questions of the physics of relativistic outflows in a systematic way. The modeled outbursts show that six sources are purely leptonically dominated, but four accelerate protons up to $\gamma \approx 10^{11}$, which has implications for the possible sources of extragalactic cosmic rays among the blazars. Furthermore, a dependence is found between the magnetic field of the emission region and the injected luminosity, which holds independently of the underlying particle populations. In this context, the blazar sequence can be explained as an evolutionary scenario: the sequence $FSRQ \rightarrow LBL/IBL \rightarrow HBL$ arises from the decreasing gas density of the host galaxy and the accompanying decrease of the accretion rate; this is confirmed by further cosmological observations. A decreasing matter density within the relativistic outflow is accompanied by a decreasing magnetic field, which also means that protons can no longer be confined in the radiation zone long before this happens to the electrons. The blazar sequence is thus a measure of the hadronicity of the jet. This also explains the dichotomy of FSRQs and BL Lac objects, as well as the bimodality in other manifestations of AGN, e.g. FR-I and FR-II radio galaxies. During the modeling it is shown that blazar spectra, especially in the hadronic case, can no longer be treated statically, since cumulative effects arise owing to the long proton synchrotron timescale.
The low luminosity of the sources and the different observation times of various experiments demand a time-dependent treatment for variable blazars even in the leptonic case. The short-term variability of an individual blazar appears to always have the same cause, but this cause differs between sources. In addition, for every blazar that could be observed in different flux states, the difference between long-term and short-term variability is examined, also with regard to a possible global quiescent state.
It is widely believed that the modular organization of cellular function is reflected in a modular structure of molecular networks. A common view is that a "module" in a network is a cohesively linked group of nodes, densely connected internally and sparsely interacting with the rest of the network. Many algorithms try to identify functional modules in protein-interaction networks (PIN) by searching for such cohesive groups of proteins. Here, we present an alternative approach independent of any prior definition of what actually constitutes a "module". In a self-consistent manner, proteins are grouped into "functional roles" if they interact in similar ways with other proteins according to their functional roles. Such grouping may well result in cohesive modules again, but only if the network structure actually supports this. We applied our method to the PIN from the Human Protein Reference Database (HPRD) and found that a representation of the network in terms of cohesive modules, at least on a global scale, does not optimally represent the network's structure, because it focuses on finding independent groups of proteins. In contrast, a decomposition into functional roles is able to depict the structure much better, as it also takes into account the interdependencies between roles and even allows groupings based on the absence of interactions between proteins in the same functional role. This, for example, is the case for transmembrane proteins, which could never be recognized as a cohesive group of nodes in a PIN. When mapping experimental methods onto the groups, we identified profound differences in coverage, suggesting that our method is also able to capture experimental bias in the data. For example, yeast two-hybrid data were highly overrepresented in one particular group.
Thus, there is more structure in protein-interaction networks than cohesive modules alone and we believe this finding can significantly improve automated function prediction algorithms.
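As a minimal illustration of grouping by interaction pattern rather than by cohesion (a toy construction, not the thesis's actual algorithm): in the sketch below, three nodes that never interact with each other but share the same neighbors end up in one "role", which is exactly the situation described above for transmembrane proteins.

```python
import numpy as np

# toy network: nodes 0-2 never interact with one another, but all
# bind the hub nodes 3 and 4 -- no cohesive module, yet a clear role
A = np.array([
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 1, 1, 0, 1],
    [1, 1, 1, 1, 0],
])

# crude role assignment: nodes with identical adjacency rows share a role
# (real role/blockmodel methods use similarity, not exact equality)
roles = {}
for i, row in enumerate(A):
    roles.setdefault(tuple(row), []).append(i)

for members in roles.values():
    print("role:", members)
```

Nodes 0-2 are grouped purely by the *absence* of mutual links combined with shared neighbors; the hubs 3 and 4 land in singleton roles here only because exact row matching is deliberately crude.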
Indirect Search for Dark Matter in the Universe - the Multiwavelength and Multiobject Approach
(2011)
Cold dark matter constitutes a basic tenet of modern cosmology, essential for our understanding of structure formation in the Universe. Since its first discovery by means of spectroscopic observations of the dynamics of the Coma cluster some 80 years ago, mounting evidence of its gravitational pull and its impact on the geometry of space-time has built up across a wide range of scales, from galaxies to the entire Hubble flow. The apparent lack of electromagnetic coupling and independent measurements of the energy density of baryonic matter from the primordial abundances of light elements show the non-baryonic nature of dark matter, and its clustering properties prove that it is cold, i.e. that its temperature was lower than its mass at the time of radiation-matter equality. Generic particle candidates for cold dark matter are weakly interacting massive particles at the electroweak symmetry-breaking scale, such as the neutralinos in R-parity-conserving supersymmetry. Such particles would naturally freeze out with a cosmologically relevant relic density at early times in the expanding Universe. Subsequent clustering of matter would revive annihilation interactions between the dark matter particles to some extent and thus lead to potentially observable high-energy emission from the decaying unstable secondaries produced in annihilation events. The spectra of the secondaries would permit a determination of the mass and annihilation cross section, which are crucial for the microphysical identification of the dark matter. This is the central motivation for indirect dark matter searches. However, at present neither the indirect searches, nor the complementary direct searches based on the detection of elastic scattering events, nor the production of candidate particles in collider experiments have provided unequivocal evidence for dark matter.
This does not come as a surprise, since the dark matter particles interact only through weak interactions and therefore the corresponding secondary emission must be extremely faint. It turns out that even for the strongest mass concentrations in the Universe, the dark matter annihilation signal is expected to not exceed the level of competing astrophysical sources. Thus, the discrimination of the putative dark matter annihilation signal from the signals of the astrophysical inventory has become crucial for indirect search strategies. In this thesis, a novel search strategy will be developed and exemplified in which target selection across a wide range of masses, astrophysical background estimation, and multiwavelength signatures play the key role. It turns out that the uncertainties regarding the halo profile and the boost due to surviving substructure are larger for halos at the lower end of the observed mass scales, i.e. in the regime of dwarf galaxies and below, while astrophysical backgrounds tend to become more severe for massive dark matter halos such as clusters of galaxies. By contrast, the uncertainties due to unknown details of particle physics are invariant under changes of the halo mass. Therefore, the different scaling behaviors can be employed to significantly cut down on the uncertainties in observations of different targets covering a major part of the involved mass scales. This strategic approach was implemented in the scientific program carried out with the MAGIC telescope system. Observations of dwarf galaxies and the Virgo and Perseus clusters of galaxies have been carried out and, at the time of writing, result in some of the most stringent constraints on weakly interacting massive particles from indirect searches.
Here, the low-threshold design of the MAGIC telescope system plays a crucial role, since the bulk of the high-energy photons, produced with a high multiplicity during the fragmentation of unstable dark matter annihilation products, are emitted at energies well below the dark matter mass scale. The upper limits severely constrain less generic, but more prolific scenarios characterized by extraordinarily high annihilation efficiencies.
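For orientation, the annihilation signal discussed above is conventionally factorized into a particle-physics term and an astrophysical J-factor, $\Phi \propto \langle\sigma v\rangle/(8\pi m_\chi^2)\cdot J$. The sketch below (generic numbers, not results from the thesis) shows the resulting $1/m_\chi^2$ scaling that makes heavier candidates fainter at a fixed J-factor.

```python
import math

SIGMA_V = 3e-26  # cm^3 s^-1, the canonical thermal-relic cross section

def particle_factor(m_chi_gev):
    """Particle-physics part of the annihilation flux, falling as 1/m_chi^2
    (units kept schematic; this is a scaling argument, not a flux prediction)."""
    return SIGMA_V / (8.0 * math.pi * m_chi_gev ** 2)

# at a fixed astrophysical J-factor, a 100 GeV candidate outshines
# a 1 TeV candidate by the squared mass ratio
ratio = particle_factor(100.0) / particle_factor(1000.0)
print(f"flux(100 GeV) / flux(1 TeV) = {ratio:.0f}")  # prints 100
```

Since the particle-physics factor is the same for every target, halos of different masses can be played off against each other, as the search strategy above exploits.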
During the last decades the standard model of particle physics has evolved into one of the most precise theories in physics, describing the properties and interactions of fundamental particles in various experiments with high accuracy. However, it suffers from several shortcomings, from an experimental as well as a theoretical point of view: there is no established mechanism for the generation of the masses of the fundamental particles, in particular not for the light but massive neutrinos. In addition, the standard model does not provide an explanation for the observation of dark matter in the universe. Moreover, the gauge couplings of the three forces in the standard model do not unify, implying that a fundamental theory combining all forces cannot be formulated. Within this thesis we address supersymmetric models as answers to these questions, but instead of focusing on the simplest supersymmetrization of the standard model, we consider basic extensions, namely the next-to-minimal supersymmetric standard model (NMSSM), which contains an additional singlet field, and R-parity-violating models. R-parity is a discrete symmetry introduced to guarantee the stability of the proton. Using lepton-number-violating terms in the context of bilinear R-parity violation and the munuSSM, we are able to explain neutrino physics in an intrinsically supersymmetric way, since those terms induce a mixing between the neutralinos and the neutrinos. Since 2009 the Large Hadron Collider (LHC) at CERN has been exploring the new tera-electronvolt energy regime, allowing the production of potentially existing heavy particles in proton collisions. Thus the near future might provide answers to the open question of mass generation in the standard model and show hints of physics beyond the standard model.
Therefore this thesis works out the phenomenology of the supersymmetric models under consideration and points out differences to the well-known features of the simplest supersymmetric realization of the standard model. In the case of the R-parity-violating models, the decays of the light neutralinos can result in displaced vertices. In combination with a light singlet state, these displaced vertices might offer a rich phenomenology, such as non-standard Higgs decays into a pair of singlinos decaying with displaced vertices. Within this thesis we present calculations at the next order of perturbation theory, since one-loop corrections can provide large contributions to the tree-level masses and decay widths. We use an on-shell renormalization scheme to calculate the masses of neutralinos and charginos, including the neutrinos and leptons in the case of the R-parity-violating models, at the one-loop level. The discussion shows the similarities and differences with respect to existing calculations in another renormalization scheme, the $\overline{\mathrm{DR}}$ scheme. Moreover, we consider two-body decays of the form $\chi_j^0 \to \chi_l^\pm W^\mp$ involving a heavy gauge boson in the final state at the one-loop level. Corrections are found to be large in the case of small or vanishing tree-level decay widths, and also for the R-parity-violating decay of the lightest neutralino, $\chi_1^0 \to \ell^\pm W^\mp$. An interesting feature of the models based on bilinear R-parity violation is the correlation between the branching ratios of the lightest neutralino decays and the neutrino mixing angles. We discuss these relations at tree level and, for the two-body decays $\chi_1^0 \to \ell^\pm W^\mp$, also at the one-loop level, since only the full one-loop corrections reproduce the behavior expected at tree level. The appendix describes the two programs MaCoR and CNNDecays developed for the analyses carried out in this thesis.
MaCoR allows for the calculation of mass matrices and couplings in the models under consideration and CNNDecays is used for the one-loop calculations of neutralino and chargino mass matrices and the two-body decay widths.
At the present day, the idea of cosmological inflation constitutes an important extension of Big Bang theory. Since its appearance in the early 1980s, many physical mechanisms have been worked out that put the inflationary expansion of space preceding the Hot Big Bang on a sound theoretical basis. Among the achievements of the theory of inflation are the explanation of the almost Euclidean geometry of 'visible' space and of the homogeneity of the cosmic background radiation, but, in particular, also of its tiny inhomogeneities with a relative amplitude of $10^{-5}$. In many models of inflation the inflationary phase ends only locally. Hence, there exists the possibility that the inflationary process still goes on in regions beyond our visual horizon. This property is commonly termed 'eternal inflation'. In the framework of cosmological scalar fields, eternal inflation can manifest itself in a variety of ways. On the one hand, fluctuations of the field, if sufficiently large, can work against the classical trajectory and thereby counteract the end of inflation. In regions where this is the case, the accelerated expansion of space continues at a higher rate. In parts of such a region the process may replicate itself again and in this way may continue throughout all of time. Space and field are said to reproduce themselves. On the other hand, a mechanism that can occur in addition to or independently of the latter is so-called vacuum tunneling. If the potential of the scalar field has several local minima, a semi-classical calculation suggests that within a spherical region, a bubble, the field can tunnel to another state. The respective tunneling rates depend on the potential difference and the shape of the potential between the states. Generally, the tunneling rate is exponentially suppressed, which means that inflation lasts for a long time before tunneling takes place.
The ongoing inflationary process effectively reduces local curvature, anisotropy and inhomogeneity, a property known as the 'cosmic no-hair conjecture'. For this reason, cosmological considerations of the evolution of bubbles have thus far almost entirely involved vacuum (de Sitter) backgrounds. However, new insights in the framework of string theory suggest high tunneling rates, which allow for the possibility of bubble nucleation in non-vacuum-dominated backgrounds. In this case the evolution of the bubble depends on the properties of the background spacetime. A deeper introduction in chapter 4 is followed by the presentation of the Lemaître-Tolman spacetime in chapter 5, which constitutes the background spacetime in the study of the effect of matter and inhomogeneity on the evolution of vacuum bubbles. In chapter 6 we explicitly describe the application of the 'thin-shell' formalism and the resulting system of equations. This is succeeded in chapter 7 by the detailed analysis of bubble evolution in various limits of the Lemaître-Tolman spacetime and in a Robertson-Walker spacetime with a rapid phase transition. The central observations are that the presence of dust, at a fixed surface energy density, goes along with a smaller nucleation volume and possibly leads to a collapse of the bubble. In an expanding background, the radially inhomogeneous dust profile is efficiently diluted, so that there is essentially no effect on the evolution of the domain wall. This changes for a radially inhomogeneous curvature profile: positive curvature decelerates the expansion of the bubble. Moreover, we point out that the adopted approach does not allow for a treatment of the physically expected matter transfer, so that the results are to be understood as preliminary under this caveat. In the second part of this thesis we consider potentially observable consequences of bubble collisions in the cosmic microwave background radiation.
The topological nature of the signal suggests the use of statistics that are well suited to quantify the morphological properties of the temperature fluctuations. In chapter 10 we present Minkowski Functionals (MFs), which provide exactly such statistics. The presented error analysis allows for a higher precision of numerical MFs in comparison to earlier methods. In chapter 12 we present the application of our algorithm to a Gaussian map and a collision map. We motivate the expected MFs and extract their numerical counterparts. We find that our least-squares fitting procedure accurately reproduces an underlying signal only when a large number of realizations of maps are averaged over, while for a single map at WMAP or PLANCK resolution we are only able to recover the result for a highly prominent disk with $|\delta T| = 2\sqrt{\sigma_G}$ and $\vartheta_d = 40^\circ$. This is unfortunate, as it means that MFs are intrinsically too noisy to distinguish cold and hot spots of small sizes in the CMB.
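The simplest Minkowski Functional, the area fraction $v_0$, can serve as a small worked example: for a Gaussian random field its expectation is $v_0(\nu) = \tfrac{1}{2}\,\mathrm{erfc}(\nu/\sqrt{2})$, and deviations from this curve are the kind of morphological signal discussed above. The following sketch (a toy random map, not the thesis's pipeline) compares the measured fraction to the Gaussian expectation.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(42)
gauss_map = rng.standard_normal((512, 512))  # stand-in for a CMB-like map

for nu in (0.0, 1.0, 2.0):
    # area fraction above threshold nu (in units of the map's std dev)
    v0_measured = float(np.mean(gauss_map > nu))
    # Gaussian-field expectation for the area functional
    v0_expected = 0.5 * erfc(nu / sqrt(2.0))
    print(f"nu = {nu:.1f}: measured v0 = {v0_measured:.4f}, "
          f"Gaussian expectation = {v0_expected:.4f}")
```

The higher MFs (boundary length and Euler characteristic) follow the same logic but probe the shape and connectivity of the excursion set rather than its area.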
We consider the prospects for a neutrino factory measuring mixing angles, the CP-violating phase and mass-squared differences by detecting wrong-charge muons arising from the chain $\mu^+\to\nu_e\to\nu_\mu\to\mu^-$ and the right-charge muons coming from the chain $\mu^+\to\bar{\nu}_\mu\to\bar{\nu}_\mu\to\mu^+$ (and similarly for $\mu^-$ chains), where $\nu_e\to\nu_\mu$ and $\bar{\nu}_\mu\to\bar{\nu}_\mu$ are neutrino oscillation channels through a long baseline. First, we study physics with near detectors and consider the treatment of systematic errors including cross section errors, flux errors, and background uncertainties. We illustrate for which measurements near detectors are required, discuss how many are needed, and what the role of the flux monitoring is. We demonstrate that near detectors are mandatory for the leading atmospheric parameter measurements if the neutrino factory has only one baseline, whereas systematic errors partially cancel if the neutrino factory complex includes the magic baseline. Second, we perform the baseline and energy optimization of the neutrino factory including the latest simulation results from the magnetized iron neutrino detector (MIND). We also consider the impact of $\tau$ decays, generated by the appearance channels $\nu_\mu \rightarrow \nu_\tau$ and $\nu_e \rightarrow \nu_\tau$, on the discovery reach for the mass ordering, leptonic CP violation, and non-zero $\theta_{13}$, which we find to be negligible for the considered detector. Third, we compare a high-energy neutrino factory to a low-energy neutrino factory and find that they are just two versions of the same experiment optimized for different regions of the parameter space. In addition, we briefly comment on whether it is useful to build the bi-magic baseline at the low-energy neutrino factory. Finally, the effects of one additional massive sterile neutrino are discussed in the context of a combined short- and long-baseline setup.
It is found that near detectors can provide the required sensitivity at the LSND-motivated $\Delta m_{41}^2$-range, while some sensitivity can also be obtained in the region of the atmospheric mass splitting introduced by the sterile neutrino from the long baselines.
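The appearance channels above rest on the standard oscillation formula; in the two-flavor vacuum approximation, $P(\nu_a\to\nu_b) = \sin^2 2\theta\,\sin^2\!\big(1.267\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}]\big)$. The sketch below evaluates it for generic illustrative parameters, not the specific setups studied in the thesis.

```python
import math

def oscillation_probability(theta, dm2_ev2, L_km, E_gev):
    """Two-flavor vacuum oscillation probability:
    P = sin^2(2 theta) * sin^2(1.267 * dm2 * L / E)."""
    return math.sin(2.0 * theta) ** 2 * \
        math.sin(1.267 * dm2_ev2 * L_km / E_gev) ** 2

# illustrative numbers: atmospheric-scale splitting, maximal mixing,
# a 3000 km baseline and 10 GeV neutrinos (assumed values, not the thesis's)
p = oscillation_probability(theta=math.pi / 4, dm2_ev2=2.5e-3,
                            L_km=3000.0, E_gev=10.0)
print(f"P(nu_a -> nu_b) = {p:.3f}")
```

The $L/E$ dependence of the phase is what drives the baseline and energy optimization discussed in the abstract: different $(\theta, \Delta m^2)$ regions are best resolved by different combinations of baseline and beam energy.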
The idea that our observable Universe may have originated from a quantum tunneling event out of an eternally inflating false vacuum state is a cornerstone of the multiverse paradigm. Modern theories that are considered as approaches towards an ultraviolet-complete fundamental theory of particles and gravity, such as the various types of string theory, even suggest that a vast landscape of different vacuum configurations exists, and that gravitational tunneling is an important mechanism by which the Universe can explore this landscape. The tunneling scenario also presents a unique framework to address the initial conditions of our observable Universe. In particular, it allows one to introduce deviations from the cosmological concordance model in a controlled and well-motivated way. These deviations are a central topic of this work. An important feature in most of the theories mentioned above is the presumed existence of additional space dimensions in excess of the three which we observe in our everyday experience. It was realized that these extra dimensions could avoid detection if they are compactified to microscopic length scales far beyond the reach of current experiments. There also seem to be natural mechanisms available for dynamical compactification in those theories. These typically lead to a vast landscape of different vacuum configurations, which may also differ in the number of macroscopic dimensions, only the total number of dimensions being determined by the theory. Transitions between these vacuum configurations may hence open up new directions which were previously compact, spontaneously compactify some previously macroscopic directions, or otherwise rearrange the configuration of compact and macroscopic dimensions in a more general way. From within the bubble Universe, such a process may be perceived as an anisotropic background spacetime: intuitively, the dimensions which open up may give rise to preferred directions.
If our 3+1 dimensional observable Universe was born in a process as described above, one may expect to find traces of a preferred direction in cosmological observations. For instance, two directions could be curved like on a sphere, while the third space direction is flat. Using a scenario of gravitational tunneling to fix the initial conditions, I show how the primordial signatures in such an anisotropic Universe can be obtained in principle and work out a particular example in more detail. A small deviation from isotropy also has phenomenological consequences for the later evolution of the Universe. I discuss the most important effects and show that backreaction can be dynamically important. In particular, under certain conditions, a buildup of anisotropic stress in different components of the cosmic fluid can lead to a dynamical isotropization of the total stress-energy tensor. The mechanism is again demonstrated with the help of a physical example.