Active galactic nuclei (AGN) are among the brightest sources in our universe. These galaxies are considered active because their central region outshines the combined luminosity of all stars in the host galaxy. At their center lies a supermassive black hole (SMBH) surrounded by an accretion disk and, further out, a dusty torus. AGN emit over the whole electromagnetic spectrum, from radio frequencies through optical and X-ray emission up to the $\gamma$-rays, although not every source is detected in each frequency regime. In this work mainly blazars are examined at low radio frequencies. Blazars are a subclass of radio-loud AGN. Radio-loud sources usually exhibit highly collimated jets perpendicular to the accretion disk; for blazars these jets point in the direction of the observer and their emission is highly variable. \\
AGN are classified into different subclasses based on their morphology. These subclasses are combined in the AGN unification model, which explains the different morphologies with sources that differ only in their luminosities and in the angle of their jet axis to the observer's line of sight. Blazars are the targets where the jet points towards the observer, while AGN observed edge-on are called radio galaxies. Blazars should therefore be the counterparts of radio galaxies seen from a different angle; testing this is one of the goals of this work. \\
Since the discovery of AGN in the 1940s these objects have been studied at all wavelengths. With the development of radio interferometry the angular resolution of radio observations improved considerably. Over the last 20 years many AGN have been monitored regularly. One of these monitoring programs is the MOJAVE program, which follows 274 AGN using the Very Long Baseline Interferometry (VLBI) technique. The monitoring provides information on the evolution and structure of AGN and their jets. However, the mechanisms of jet formation and collimation are not fully understood. Due to relativistic effects it is difficult to obtain intrinsic instead of apparent jet parameters. One approach to get closer to the intrinsic jet power is to observe the regions in which the jets terminate and interact with the intergalactic medium. Observations at lower radio frequencies are more sensitive to this extended diffuse emission. \\
Since December 2012 a new low-frequency radio telescope has been observing: the Low Frequency Array (LOFAR), a telescope with stations consisting of dipole antennas. The major part of the array is located in the Netherlands (38 stations), with 12 additional international stations in Germany, France, Sweden, Poland, and the United Kingdom. LOFAR offers the possibility to observe at frequencies between 30 and 250 MHz with an angular resolution (below 1 arcsec for the full array) that was not available with previous telescopes. \\
In this work results of blazar studies with LOFAR observations are presented. To take advantage of a large database of multi-wavelength observations and kinematic studies, the MOJAVE 1.5 Jy flux-density-limited sample was chosen. Based on the preliminary results of the LOFAR Multifrequency Snapshot Sky Survey (MSSS), the flux densities and spectral indices of the blazars of the MOJAVE sample are examined; 125 counterparts of MOJAVE blazars were found in the MSSS catalog. Since the MSSS observations only use the stations in the Netherlands and are taken in snapshots, the angular resolution and the sensitivity are limited: the first MSSS catalog was produced with an angular resolution of $\sim$120 arcsec and a sensitivity of $\sim$50--100 mJy. Another advantage of the MOJAVE sample is the monitoring of these sources with the Owens Valley Radio Observatory (OVRO) at 15 GHz, which produces radio light curves. With these observations it is possible to obtain quasi-simultaneous 15 GHz flux densities for the corresponding MSSS observations, so that the variability of the blazars affects the flux densities less than with the use of archival data. The spectral indices obtained by combining MSSS and OVRO flux densities can be used to estimate the contribution of the diffuse extended emission of these AGN. \\
Comparing the MSSS catalog with the OVRO data points, the flux densities tend to be higher at low frequencies, as expected from the larger contribution of extended emission. The broadband spectral-index distribution shows a peak at $\sim-0.2$. While some sources have steeper spectral indices, meaning that extended emission contributes a large fraction of the total flux density, more than half of the sample shows flat spectral indices. The flat spectral indices indicate that the total flux densities of these sources are dominated by their relativistically beamed emission regions, as is the case for observations at GHz frequencies. \\
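The broadband spectral indices discussed above follow from just two flux-density measurements. A minimal sketch, assuming the convention $S_\nu \propto \nu^{\alpha}$ (so a flat spectrum has $\alpha \approx 0$); the frequencies and flux densities are illustrative placeholders, not values from this work:

```python
# Two-point broadband spectral index, convention S(nu) ∝ nu^alpha.
# All numbers below are illustrative placeholders.
import math

def spectral_index(s_low, nu_low, s_high, nu_high):
    """Broadband spectral index between two frequencies."""
    return math.log(s_high / s_low) / math.log(nu_high / nu_low)

# e.g. an MSSS-band point near 150 MHz and an OVRO point at 15 GHz
alpha = spectral_index(s_low=2.0, nu_low=150e6, s_high=1.5, nu_high=15e9)
print(round(alpha, 3))
```

With quasi-simultaneous measurements the same formula directly separates flat (beamed, core-dominated) from steep (extended-emission-dominated) sources.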
To obtain more detailed images of these sources, the MSSS measurement sets containing sources of the sample were reprocessed to improve the angular resolution to $\sim$30 arcsec. The higher angular resolution reveals extended diffuse emission for several blazars. Since the reimaging results were not fully calibrated, only the morphology at this resolution could be examined, and with the short snapshot observations the images obtained with this strategy are affected by artifacts. The reimaging could be successfully performed for 93 sources in one frequency band. For 45 of these sources all available frequency bands could be reprocessed and used to create averaged images, which are presented in this work. As a result of the reimaging process a pilot sample was defined to observe targets with diffuse extended emission using the whole LOFAR array including the international stations. \\
The second part of this work presents the results for a pilot sample of four blazars observed with the LOFAR international array. Since the calibration of this kind of LOFAR observation is still in development, the main focus was the description of the calibration strategy used. The strategy still has some limitations but resulted in images with angular resolutions of less than 1 arcsec. The morphologies of all four blazars show features confirming the expectations from their counterpart radio galaxies. With the flux densities of the extended emission found in these brightness distributions, the extended radio luminosities are calculated. Comparing these to the radio-galaxy classifications also confirms the expectations from the unification model. \\
By extending the sample of blazars observed with the LOFAR international array in the future, the calibration strategy can be used to create similar high-resolution images. A larger sample can then be used to test the unification model with statistically significant results. \\
Classical novae are thermonuclear explosions occurring on the surface of white dwarfs.
When the white dwarf resides in a binary system with a main-sequence or more evolved star, mass
accretion from the companion to the white dwarf can take place if the companion
overflows its Roche lobe. The envelope of hydrogen-rich matter which builds on
top of the white dwarf eventually ignites under degenerate conditions, leading to
a thermonuclear runaway and an explosion of the order of $10^{46}$ erg, while leaving
the white dwarf intact. Spectral analyses of the debris indicate an abundance of
isotopes that are tracers of nuclear burning via the hot CNO cycle, which in turn
reveal some sort of mixing between the envelope and the white dwarf underneath.
The exact mechanism is still a matter of debate.
The convection and deflagration in novae develop in the low Mach number regime.
We used the Seven League Hydro code (SLH), which employs numerical schemes
designed to correctly simulate low Mach number flows, to perform two and three-
dimensional simulations of classical novae. Based on a spherically symmetric model
created with the aid of a stellar evolution code, we developed our own nova model and
tested it on a variety of numerical grids and boundary conditions for validation. We
focused on the evolution of temperature, density and nuclear energy generation rate at
the layers between white dwarf and envelope, where most of the energy is generated,
to understand the structure of the transition region, and its effect on the nuclear
burning. We analyzed the resulting dredge-up efficiency stemming from the convective
motions in the envelope. Our models yield results similar to those in the literature, but seem
to depend very strongly on the numerical resolution. We followed the evolution of
the nuclear species involved in the CNO cycle and concluded that the thermonuclear
reactions primarily taking place are those of the cold and not the hot CNO cycle.
The reason behind this could be that under the conditions generally assumed for
multi-dimensional simulations, the envelope is in fact not degenerate. We performed
initial tests for 3D simulations and realized that alternative boundary conditions are
needed.
Next-to-leading-order electroweak corrections to pp → W\(^{+}\)W\(^{-}\) → 4 leptons at the LHC
(2016)
We present results of the first calculation of next-to-leading-order electroweak corrections to W-boson pair production at the LHC that fully takes into account leptonic W-boson decays and off-shell effects. Employing realistic event selections, we discuss the corrections in situations that are typical for the study of W-boson pairs as a signal process or of Higgs-boson decays H → WW∗, to which W-boson pair production represents an irreducible background. In particular, we compare the full off-shell results, obtained treating the W-boson resonances in the complex-mass scheme, to previous results in the so-called double-pole approximation, which is based on an expansion of the loop amplitudes about the W resonance poles. At small and intermediate scales, i.e. in particular in angular and rapidity distributions, the two approaches show the expected agreement at the level of fractions of a percent, but larger differences appear in the TeV range. For transverse-momentum distributions, the differences can even exceed the 10% level in the TeV range where “background diagrams” with one instead of two resonant W bosons gain in importance because of recoil effects.
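The complex-mass scheme used to treat the W-boson resonances admits a compact statement: the real gauge-boson masses are replaced by complex pole masses everywhere they appear, including in the weak mixing angle, so that resonant propagators are regularized by the finite width while gauge invariance is preserved. Schematically:

```latex
% Complex-mass scheme: complex pole masses replace the real masses
% consistently, including in the weak mixing angle.
\mu_W^2 = M_W^2 - i\, M_W \Gamma_W , \qquad
\mu_Z^2 = M_Z^2 - i\, M_Z \Gamma_Z , \qquad
\cos^2\theta_{\mathrm{w}} = \frac{\mu_W^2}{\mu_Z^2} ,
\qquad
\frac{1}{p^2 - M_W^2} \;\to\; \frac{1}{p^2 - \mu_W^2} .
```

The double-pole approximation, by contrast, expands the loop amplitudes about the poles at $p^2 = \mu_W^2$, which explains the agreement near resonance and the growing differences in the TeV tails noted above.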
NLO electroweak corrections to off-shell top-antitop production with leptonic decays at the LHC
(2016)
For the first time the next-to-leading-order electroweak corrections to the full off-shell production of two top quarks that decay leptonically are presented. This calculation includes all off-shell, non-resonant, and interference effects for the 6-particle phase space. While the electroweak corrections are below one per cent for the integrated cross section, they reach up to 15% in the high-transverse-momentum region of distributions. To support the results of the complete one-loop calculation, we have in addition evaluated the electroweak corrections in two different pole approximations, one requiring two on-shell top quarks and one featuring two on-shell W bosons. While the former deviates by up to 10% from the full calculation for certain distributions, the latter provides a very good description for most observables. The increased centre-of-mass energy of the LHC makes the inclusion of electroweak corrections extremely relevant as they are particularly large in the Sudakov regime where new physics is expected to be probed.
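The growth of electroweak corrections in the Sudakov regime mentioned above follows the well-known high-energy double logarithms; schematically, the leading one-loop term behaves as

```latex
% Leading electroweak Sudakov behaviour at high energies (schematic;
% C^ew is a process-dependent coupling factor).
\delta_{\mathrm{EW}} \;\sim\; -\,\frac{\alpha}{4\pi}\, C^{\mathrm{ew}}
  \log^2\!\frac{\hat{s}}{M_W^2} ,
```

where $\hat{s}$ is the partonic centre-of-mass energy squared. The negative double logarithm grows with energy, consistent with corrections reaching the ten-percent level in the high-transverse-momentum tails quoted above.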
The next-to-leading-order electroweak corrections to pp → l\(^{+}\)l\(^{-}\)/\(\nu\bar{\nu}\) + γ + X production, including all off-shell effects of intermediate Z bosons in the complex-mass scheme, are calculated for LHC energies, revealing the typically expected large corrections of tens of percent in the TeV range. Contributions from quark–photon and photon–photon initial states are taken into account as well, but their impact is found to be moderate or small. Moreover, the known next-to-leading-order QCD corrections are reproduced. In order to separate hard photons from jets, both a quark-to-photon fragmentation function à la Glover/Morgan and Frixione’s cone isolation are employed. The calculation is available in the form of Monte Carlo programs allowing for the evaluation of arbitrary differential cross sections. Predictions for integrated cross sections are presented for the LHC at 7 TeV, 8 TeV, and 14 TeV, and differential distributions are discussed at 14 TeV for bare muons and dressed leptons. Finally, we consider the impact of anomalous ZZγ and Zγγ couplings.
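Frixione’s cone isolation mentioned here has a compact standard definition (the isolation parameters $\varepsilon$, $\delta_0$ and the exponent $n$ are not specified in the abstract):

```latex
% Frixione's smooth cone isolation: the hadronic transverse energy in
% every sub-cone of radius delta around the photon must vanish
% smoothly as delta -> 0, which removes the collinear quark-photon
% region without a fragmentation contribution.
\sum_{i\,:\;R_{i\gamma}<\delta} E_{\mathrm{T},i}
  \;\le\; \varepsilon\, E_{\mathrm{T},\gamma}
  \left(\frac{1-\cos\delta}{1-\cos\delta_0}\right)^{n}
  \qquad \text{for all } \delta \le \delta_0 .
```

Because the allowed hadronic energy vanishes smoothly at small $\delta$, this criterion needs no fragmentation function, whereas the Glover/Morgan approach keeps collinear quark–photon configurations and absorbs them into the fragmentation function.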
We present evidence for the existence of a hybrid state of Tamm plasmons and microcavity exciton polaritons in a II-VI-material-based microcavity sample covered with an Ag metal layer. The bare cavity mode shows a characteristic anticrossing with the Tamm-plasmon mode when microreflectivity measurements are performed for different detunings between the Tamm plasmon and the cavity mode. When the Tamm-plasmon mode is in resonance with the cavity polariton, four hybrid eigenstates are observed due to the coupling of the cavity-photon mode, the Tamm-plasmon mode, and the heavy- and light-hole excitons. When the bare Tamm-plasmon mode is tuned, these resonances exhibit three anticrossings. Experimental results are in good agreement with calculations based on the transfer-matrix method as well as on the coupled-oscillator model. The lowest hybrid eigenstate is observed to be redshifted by about 13 meV with respect to the lower cavity-polariton state when the Tamm plasmon is resonantly coupled with the cavity polariton. This spectral shift, which is caused by the metal layer, can be used to create a trapping potential channel for the polaritons. Such channels can guide the polariton propagation similarly to one-dimensional polariton wires.
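The coupled-oscillator picture of the four hybrid eigenstates can be sketched as the diagonalization of a 4×4 Hamiltonian for cavity photon, Tamm plasmon, and heavy-/light-hole excitons. All energies and coupling strengths below are illustrative placeholders, not the fitted values from this work:

```python
# Coupled-oscillators sketch for the four hybrid eigenstates.
# Energies (eV) and couplings are illustrative placeholders only.
import numpy as np

E_cav, E_tamm, E_hh, E_lh = 2.760, 2.755, 2.770, 2.790  # bare mode energies
V_ct = 0.010                   # cavity-Tamm coupling
V_chh, V_clh = 0.020, 0.012    # cavity-exciton couplings
V_thh, V_tlh = 0.018, 0.010    # Tamm-exciton couplings

H = np.array([
    [E_cav, V_ct,   V_chh, V_clh],
    [V_ct,  E_tamm, V_thh, V_tlh],
    [V_chh, V_thh,  E_hh,  0.0  ],
    [V_clh, V_tlh,  0.0,   E_lh ],
])

# The four eigenvalues correspond to the hybrid branches seen in the
# reflectivity spectra; the lowest one lies below all bare modes,
# mirroring the redshift of the lowest hybrid eigenstate.
branches = np.linalg.eigvalsh(H)
print(branches)
```

Sweeping one bare energy (e.g. the Tamm-plasmon mode) and re-diagonalizing reproduces the three anticrossings qualitatively.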
At a hadron collider such as the LHC or the Tevatron, the production of a photon in association with a leptonically decaying vector boson represents an important class of processes. These processes stand out due to a very clean signal of a photon and two leptons. Furthermore, they provide direct access to the photon–vector-boson couplings and thus an easy opportunity to test the gauge sector of the Standard Model. Within the scope of this work we present a full calculation of the next-to-leading-order corrections, which includes the \(\mathcal{O}(\alpha_s)\) corrections of the strong interaction as well as the electroweak corrections of \(\mathcal{O}(\alpha)\) including all photon-induced contributions. For the creation of matrix elements we use methods based on Feynman diagrams. The IR singularities are treated with the dipole subtraction technique. In order to separate photons from jets, a quark-to-photon fragmentation function à la Glover/Morgan or Frixione’s cone isolation is employed. Moreover, two different scenarios for charged leptons in the final state were considered. The first scenario, for dressed leptons, assumes that a charged lepton and a photon are recombined if they are collinear. In the second scenario, for bare muons, it is assumed that lepton and photon can be separated in a detector even if they are collinear.
For our calculation we implemented all corrections into a flexible Monte Carlo program. Besides the computation of the total cross section, this program is also able to generate differential distributions of several experimentally motivated observables. Apart from the expected large electroweak corrections in the high-transverse-momentum regions and sizeable corrections in the resonance regions of the transverse or invariant masses, we found photon-induced corrections of up to several 10% for high transverse momenta. Within run I at the LHC at 7/8 TeV, the experimental accuracy for Vγ production was roughly 10%. Due to the higher luminosity in run II this uncertainty will be reduced to the level of a few percent, so that corrections of the same order in the theoretical predictions might become relevant. In this work we present results for the total cross section at the LHC for 7, 8, and 14 TeV and the corresponding distributions for 14 TeV.
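The dressed-lepton recombination described above can be sketched as merging a photon with a charged lepton when their angular distance falls below a recombination radius. The radius 0.1 is a typical choice assumed here for illustration; it is not quoted from the thesis:

```python
# Dressed-lepton recombination sketch: merge photon and lepton if
# Delta R = sqrt(dy^2 + dphi^2) < r_rec. r_rec = 0.1 is an assumed,
# typical value, not taken from the thesis.
import math

def delta_r(y1, phi1, y2, phi2):
    """Angular distance in the rapidity-azimuth plane."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi   # wrap azimuthal difference
    return math.hypot(y1 - y2, dphi)

def recombine(lepton, photon, r_rec=0.1):
    """Each particle is (rapidity, phi, (E, px, py, pz)). Return the
    dressed four-momentum if the photon is collinear to the lepton,
    otherwise the bare lepton four-momentum."""
    (y_l, phi_l, p_l), (y_g, phi_g, p_g) = lepton, photon
    if delta_r(y_l, phi_l, y_g, phi_g) < r_rec:
        return tuple(a + b for a, b in zip(p_l, p_g))
    return p_l
```

In the bare-muon scenario no such recombination is performed, which is why the collinear lepton-photon region, and hence the size of the corrections, differs between the two setups.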
Due to their potential application for quantum computation, quantum dots have attracted a lot of interest in recent years. In these devices single electrons can be captured, whose spin can be used to define a quantum bit (qubit). However, the information stored in these quantum bits is fragile due to the interaction of the electron spin with its environment. While many of the resulting problems have already been solved, even on the experimental side, the hyperfine interaction between the nuclear spins of the host material and the electron spin in their center remains as one of the major obstacles. As a consequence, the reduction of the number of nuclear spins is a promising way to minimize this effect. However, most quantum dots have a fixed number of nuclear spins due to the presence of group III and V elements of the periodic table in the host material. In contrast, group IV elements such as carbon allow for a variable size of the nuclear spin environment through isotopic purification. Motivated by this possibility, we theoretically investigate the physics of the central spin model in carbon based quantum dots. In particular, we focus on the consequences of a variable number of nuclear spins on the decoherence of the electron spin in graphene quantum dots.
Since our models are, in many aspects, based upon actual experimental setups, we provide an overview of the most important achievements of spin qubits in quantum dots in the first part of this Thesis. To this end, we discuss the spin interactions in semiconductors on a rather general ground. Subsequently, we elaborate on their effect in GaAs and graphene, which can be considered as prototype materials. Moreover, we also explain how the central spin model can be described in terms of open and closed quantum systems and which theoretical tools are suited to analyze such models.
Based on these prerequisites, we then investigate the physics of the electron spin using analytical and numerical methods. We find an intriguing thermal flip of the electron spin using standard statistical physics. Subsequently, we analyze the dynamics of the electron spin under influence of a variable number of nuclear spins. The limit of a large nuclear spin environment is investigated using the Nakajima-Zwanzig quantum master equation, which reveals a decoherence of the electron spin with a power-law decay on short timescales. Interestingly, we find a dependence of the details of this decay on the orientation of an external magnetic field with respect to the graphene plane. By restricting to a small number of nuclear spins, we are able to analyze the dynamics of the electron spin by exact diagonalization, which provides us with more insight into the microscopic details of the decoherence. In particular, we find a fast initial decay of the electron spin, which asymptotically reaches a regime governed by small fluctuations around a finite long-time average value. Finally, we analytically predict upper bounds on the size of these fluctuations in the framework of quantum thermodynamics.
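The exact-diagonalization approach for a small nuclear-spin bath can be sketched for the central spin model $H = \sum_k A_k\, \mathbf{S} \cdot \mathbf{I}_k$. The couplings and the initial state below are illustrative assumptions, not parameters from the thesis:

```python
# Minimal exact diagonalization of the central spin model
# H = sum_k A_k S.I_k: one electron spin (site 0) coupled to a few
# nuclear spins 1/2. Couplings A_k are illustrative placeholders.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
sid = np.eye(2, dtype=complex)

def embed(op, site, n_sites):
    """Place a single-spin operator at `site` in an n_sites chain."""
    mats = [sid] * n_sites
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n_nuc = 3                  # number of nuclear spins in the bath
n = n_nuc + 1              # site 0 is the electron spin
A = [1.0, 0.7, 0.4]        # hyperfine couplings (arbitrary units)

H = np.zeros((2**n, 2**n), dtype=complex)
for k, a in enumerate(A, start=1):
    for s in (sx, sy, sz):
        H += a * embed(s, 0, n) @ embed(s, k, n)

vals, vecs = np.linalg.eigh(H)
psi0 = np.zeros(2**n, dtype=complex)
psi0[2**(n - 1) - 1] = 1.0   # |up>_e |down,down,down>_nuclear
Sz0 = embed(sz, 0, n)

def sz_expect(t):
    """Electron-spin polarization <S_z>(t) from the exact spectrum."""
    psi_t = vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ psi0))
    return float(np.real(psi_t.conj() @ Sz0 @ psi_t))
```

Tracking `sz_expect` over time exhibits the generic behaviour described above: an initial decay of the polarization followed by fluctuations around a finite long-time average.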
Entropy production in industrial economies involves heat currents, driven by gradients of temperature, and particle currents, driven by specific external forces and gradients of temperature and chemical potentials. Pollution functions are constructed for the associated emissions. They reduce the output elasticities of the production factors capital, labor, and energy in the growth equation of the capital-labor-energy-creativity model when the emissions approach their critical limits. These limits are set by, e.g., health hazards or threats to ecological and climate stability. By definition, the limits oblige the economic actors to dedicate shares of the available production factors to emission mitigation, or to adjustments to the emission-induced changes in the biosphere. Since these shares are missing for the production of the quantity of goods and services that would be available to consumers and investors without emission mitigation, the “conventional” output of the economy shrinks. The resulting losses of conventional output are estimated for two classes of scenarios: (1) energy conservation; and (2) nuclear exit and subsidies to photovoltaics. The data of the scenarios refer to Germany in the 1980s and after 11 March 2011. For the energy-conservation scenarios, a method of computing the reduction of output elasticities by emission abatement is proposed.
A prototype detection unit of the KM3NeT deep-sea neutrino telescope has been installed at 3500 m depth, 80 km off the Italian coast. KM3NeT in its final configuration will contain several hundred detection units. Each detection unit is a mechanical structure anchored to the sea floor, held vertical by a submerged buoy and supporting optical modules for the detection of Cherenkov light emitted by charged secondary particles emerging from neutrino interactions. This prototype string implements three optical modules with 31 photomultiplier tubes each. These optical modules were developed by the KM3NeT Collaboration to enhance the detection capability of neutrino interactions. The prototype detection unit was operated from its deployment in May 2014 until its decommissioning in July 2015. Reconstruction of the particle trajectories from the data requires nanosecond accuracy in the time calibration. A procedure for the relative time calibration of the photomultiplier tubes contained in each optical module is described. This procedure is based on the measured coincidences produced in the sea by the \(^{40}\)K background light and can easily be expanded to a detector with several thousands of optical modules. The time offsets between the different optical modules are obtained using LED nanobeacons mounted inside them. A set of data corresponding to 600 h of livetime was analysed. The results show good agreement with Monte Carlo simulations of the expected optical background and the signal from atmospheric muons. An almost background-free sample of muons was selected by filtering the time-correlated signals on all three optical modules. The zenith angle of the selected muons was reconstructed with a precision of about \(3^\circ\).
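The calibration idea can be sketched on toy data: genuine \(^{40}\)K coincidences between two PMTs pile up at a fixed time difference, while random background is flat, so the peak of the time-difference histogram estimates the inter-PMT offset. The numbers below are synthetic placeholders, not KM3NeT data:

```python
# Toy sketch of relative-time calibration from coincidence peaks:
# a Gaussian coincidence peak over a flat random background.
# All values are synthetic placeholders, not KM3NeT data.
import random

random.seed(1)
true_offset = 4.0   # ns, the offset the calibration should recover (assumed)

# toy time differences between hits on two PMTs of one optical module
dts = [random.gauss(true_offset, 1.5) for _ in range(2000)]   # coincidences
dts += [random.uniform(-25.0, 25.0) for _ in range(3000)]     # background

def peak_offset(dts, bin_width=0.5, lo=-25.0, hi=25.0):
    """Return the centre of the most populated time-difference bin."""
    n_bins = int((hi - lo) / bin_width)
    counts = [0] * n_bins
    for dt in dts:
        i = int((dt - lo) / bin_width)
        if 0 <= i < n_bins:
            counts[i] += 1
    i_max = counts.index(max(counts))
    return lo + (i_max + 0.5) * bin_width

estimate = peak_offset(dts)
```

In the real procedure a fit to the coincidence peak replaces the simple maximum-bin search, and the method scales naturally to thousands of optical modules because each PMT pair is calibrated independently.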