520 Astronomy and allied sciences
On-orbit verification of RL-based APC calibrations for micrometre level microwave ranging system
(2023)
Micrometre-level ranging accuracy between satellites on orbit relies on high-precision calibration of the antenna phase center (APC), which is currently accomplished through properly designed calibration maneuvers and batch estimation algorithms. In reality, however, unmodeled perturbations of the space dynamics and sensor-induced uncertainty complicate the situation; ranging accuracy deteriorates especially outside the antenna main lobe while maneuvers are performed. This paper proposes an on-orbit APC calibration method that uses a reinforcement learning (RL) process, aiming to provide a micrometre-level ranging datum for onboard instruments. The RL process used here is an improved temporal-difference advantage actor-critic (TDAAC) algorithm, built mainly around two neural networks (NN) for the critic and actor functions. The output of the TDAAC algorithm autonomously balances the amplitude of the APC calibration maneuvers against the APC observation sensitivity, with the objective of maximizing the APC estimation accuracy. The proposed RL-based APC calibration method is fully tested in software and in on-ground experiments, reaching an APC calibration accuracy of better than 2 mrad, and on on-orbit maneuver data from 11–12 April 2022, where it achieves a calibration accuracy of 1–1.5 mrad after RL training. The proposed RL-based APC algorithm may be extended to proof-mass calibration scenarios with action feedback to the attitude determination and control system (ADCS), showing the flexibility of spacecraft payload applications in the future.
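To make the actor-critic idea concrete, here is a minimal sketch of a temporal-difference advantage actor-critic update in PyTorch. It is illustrative only and not the paper's TDAAC implementation: the network sizes, learning rates and the single-transition interface are assumptions, and the APC calibration environment itself is not modelled.

```python
# Minimal TD(0) advantage actor-critic sketch (illustrative; not the paper's TDAAC).
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Gaussian policy: maps an observation to a distribution over continuous actions."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        return torch.distributions.Normal(self.net(obs), self.log_std.exp())

class Critic(nn.Module):
    """State-value network V(s)."""
    def __init__(self, obs_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, obs):
        return self.net(obs).squeeze(-1)

def td_ac_update(actor, critic, opt_a, opt_c, obs, act, reward, next_obs, gamma=0.99):
    """One TD(0) update: the TD error serves as the advantage for the policy gradient."""
    value = critic(obs)
    with torch.no_grad():
        td_target = reward + gamma * critic(next_obs)
        advantage = td_target - value
    critic_loss = (td_target - value).pow(2).mean()
    actor_loss = -(actor.dist(obs).log_prob(act).sum(-1) * advantage).mean()
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

# Hypothetical usage with dummy tensors standing in for one calibration transition.
actor, critic = Actor(obs_dim=6, act_dim=2), Critic(obs_dim=6)
opt_a = torch.optim.Adam(actor.parameters(), lr=3e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
obs, next_obs = torch.randn(1, 6), torch.randn(1, 6)
act = actor.dist(obs).sample()
td_ac_update(actor, critic, opt_a, opt_c, obs, act, reward=torch.tensor([0.1]), next_obs=next_obs)
```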
Lidar pose tracking of a tumbling spacecraft using the smoothed normal distribution transform
(2023)
Lidar sensors enable precise pose estimation of an uncooperative spacecraft in close range. In this context, the iterative closest point (ICP) is usually employed as a tracking method. However, when the size of the point clouds increases, the required computation time of the ICP can become a limiting factor. The normal distribution transform (NDT) is an alternative algorithm which can be more efficient than the ICP, but suffers from robustness issues. In addition, lidar sensors are also subject to motion blur effects when tracking a spacecraft tumbling with a high angular velocity, leading to a loss of precision in the relative pose estimation. This work introduces a smoothed formulation of the NDT to improve the algorithm’s robustness while maintaining its efficiency. Additionally, two strategies are investigated to mitigate the effects of motion blur. The first consists in un-distorting the point cloud, while the second is a continuous-time formulation of the NDT. Hardware-in-the-loop tests at the European Proximity Operations Simulator demonstrate the capability of the proposed methods to precisely track an uncooperative spacecraft under realistic conditions within tens of milliseconds, even when the spacecraft tumbles with a significant angular rate.
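For orientation, the following sketch shows the scoring step of a standard NDT, in which the reference cloud is summarised by one Gaussian per voxel and a candidate pose is rated by the sum of Gaussian scores of the transformed scan points. It is a generic illustration, not the smoothed or continuous-time formulations introduced in the work above; the voxel size, minimum point count and covariance regularisation are assumed values.

```python
# Illustrative scoring step of a standard NDT (not the smoothed/continuous-time variants).
import numpy as np

def build_ndt_grid(ref_points, voxel=0.5, min_pts=5, reg=1e-3):
    """Fit one Gaussian (mean, inverse covariance) per occupied voxel of the reference cloud."""
    cells = {}
    keys = np.floor(ref_points / voxel).astype(int)
    for k in np.unique(keys, axis=0):
        pts = ref_points[np.all(keys == k, axis=1)]
        if len(pts) >= min_pts:
            cov = np.cov(pts.T) + reg * np.eye(3)      # regularise near-degenerate cells
            cells[tuple(k)] = (pts.mean(axis=0), np.linalg.inv(cov))
    return cells

def ndt_score(scan_points, R, t, cells, voxel=0.5):
    """Rate the pose (R, t): sum of unnormalised Gaussian scores of the transformed points."""
    score = 0.0
    for p in scan_points @ R.T + t:
        cell = cells.get(tuple(np.floor(p / voxel).astype(int)))
        if cell is not None:
            mu, cov_inv = cell
            d = p - mu
            score += np.exp(-0.5 * d @ cov_inv @ d)
    return score

# Hypothetical usage: the identity pose should score highest for an unperturbed copy of the cloud.
ref = np.random.default_rng(1).normal(size=(5000, 3))
grid = build_ndt_grid(ref)
print(ndt_score(ref, np.eye(3), np.zeros(3), grid))
```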
The Event Horizon Telescope (EHT) has led to the first images of a supermassive black hole, revealing the central compact objects in the elliptical galaxy M87 and the Milky Way. Proposed upgrades to this array through the next-generation EHT (ngEHT) program would sharply improve the angular resolution, dynamic range, and temporal coverage of the existing EHT observations. These improvements will uniquely enable a wealth of transformative new discoveries related to black hole science, extending from event-horizon-scale studies of strong gravity to studies of explosive transients to the cosmological growth and influence of supermassive black holes. Here, we present the key science goals for the ngEHT and their associated instrument requirements, both of which have been formulated through a multi-year international effort involving hundreds of scientists worldwide.
In the past few years, the Event Horizon Telescope (EHT) has provided the first-ever event horizon-scale images of the supermassive black holes (BHs) M87* and Sagittarius A* (Sgr A*). The next-generation EHT project is an extension of the EHT array that promises larger angular resolution and higher sensitivity to the dim, extended flux around the central ring-like structure, possibly connecting the accretion flow and the jet. The ngEHT Analysis Challenges aim to understand the science extractability from synthetic images and movies to inform the ngEHT array design and analysis algorithm development. In this work, we compare the accretion flow structure and dynamics in numerical fluid simulations that specifically target M87* and Sgr A*, and were used to construct the source models in the challenge set. We consider (1) a steady-state axisymmetric radiatively inefficient accretion flow model with a time-dependent shearing hotspot, (2) two time-dependent single fluid general relativistic magnetohydrodynamic (GRMHD) simulations from the H-AMR code, (3) a two-temperature GRMHD simulation from the BHAC code, and (4) a two-temperature radiative GRMHD simulation from the KORAL code. We find that the different models exhibit remarkably similar temporal and spatial properties, except for the electron temperature, since radiative losses substantially cool down electrons near the BH and the jet sheath, signaling the importance of radiative cooling even for slowly accreting BHs such as M87*. We restrict ourselves to standard torus accretion flows, and leave larger explorations of alternate accretion models to future work.
The next-generation Event Horizon Telescope (ngEHT) will be a significant enhancement of the Event Horizon Telescope (EHT) array, with ∼10 new antennas and instrumental upgrades of existing antennas. The increased uv-coverage, sensitivity, and frequency coverage allow a wide range of new science opportunities to be explored. The ngEHT Analysis Challenges have been launched to inform the development of the ngEHT array design, science objectives, and analysis pathways. For each challenge, synthetic EHT and ngEHT datasets are generated from theoretical source models and released to the challenge participants, who analyze the datasets using image reconstruction and other methods. The submitted analysis results are evaluated with quantitative metrics. In this work, we report on the first two ngEHT Analysis Challenges. These have focused on static and dynamical models of M87* and Sgr A* and shown that high-quality movies of the extended jet structure of M87* and near-horizon hourly timescale variability of Sgr A* can be reconstructed by the reference ngEHT array in realistic observing conditions using current analysis algorithms. We identify areas where there is still room for improvement of these algorithms and analysis strategies. Other science cases and arrays will be explored in future challenges.
The observation of electromagnetic counterparts to both high energy neutrinos and gravitational waves marked the beginning of a new era in astrophysics. The multi-messenger approach allows us to gain new insights into the most energetic events in the Universe such as gamma-ray bursts, supernovas, and black hole mergers. Real-time multi-messenger alerts are the key component of the observational strategies to unravel the transient signals expected from astrophysical sources. Focusing on the high-energy regime, we present a historical perspective of multi-messenger observations, the detectors and observational techniques used to study them, the status of the multi-messenger alerts and the most significant results, together with an overview of the future prospects in the field.
The tremendous phenomenological success of the Standard Model (SM) suggests that its flavor structure and gauge interactions may not be arbitrary but should have a fundamental first-principle explanation. In this work, we explore how the basic distinctive properties of the SM dynamically emerge from a unified New Physics framework tying together both flavor physics and Grand Unified Theory (GUT) concepts. This framework is suggested by a novel anomaly-free supersymmetric chiral E\(_6\)×SU(2)\(_F\)×U(1)\(_F\) GUT containing the SM. Among the most appealing emergent properties of this theory is the Higgs-matter unification with a highly-constrained massless chiral sector featuring two universal Yukawa couplings close to the GUT scale. At the electroweak scale, the minimal SM-like effective field theory limit of this GUT represents a specific flavored three-Higgs doublet model consistent with the observed large hierarchies in the quark mass spectra and mixing already at tree level.
The extragalactic gamma-ray sky is dominated by blazars, active galactic nuclei (AGN) with a relativistic jet that is closely aligned with the line of sight. Galaxies develop an active nucleus if the central supermassive black hole (BH) accretes large amounts of ambient matter and magnetic flux. The inflowing mass accumulates around the plane perpendicular to the accretion flow's angular momentum. The flow is heated through viscous friction and part of the released energy is radiated as blackbody or non-thermal radiation, with luminosities that can dominate the accumulated stellar luminosity of the host galaxy. A fraction of the accretion flow luminosity is reprocessed in a surrounding field of ionised gas clouds. These clouds, revolving around the central BH, emit Doppler-broadened atomic emission lines. The region where these broad-line-emitting clouds are located is called the broad-line region (BLR).
About one in ten AGN forms an outflow of radiation and relativistic particles, called a relativistic jet. According to the Blandford-Znajek mechanism, this is facilitated through electromagnetic processes in the magnetosphere of a spinning BH. The latter induces a magnetospheric poloidal current circuit, generating a decelerating torque on the BH and inducing a toroidal magnetic field. Consequently, rotational energy of the BH is converted to Poynting flux streaming away mainly along the rotational axis and starting the jet. One possibility for particle acceleration near the jet base is realised by magnetospheric vacuum gaps, regions temporarily devoid of plasma, such that an intermittent electric field arises parallel to the magnetic field lines, enabling particle acceleration and contributing to the mass loading of the jets.
Magnetised structures, containing bunches of relativistic electrons, propagate away from the galactic nucleus along the jets. Assuming that these electrons emit synchrotron radiation and that they inverse-Compton (IC) up-scatter abundant target photons, which can either be the synchrotron photons themselves or photons from external emitters, the emitted spectrum can be theoretically determined. Additionally taking into account that these emission regions move relativistically themselves and that the emission is Doppler-boosted and beamed in forward direction, the typical two-hump spectral energy distribution (SED) of blazars is recovered.
There are, however, findings that challenge this well-established model. Short-time variability, reaching down to minute scales at very high energy gamma rays, is today known to be a widespread phenomenon of blazars, calling for very compact emission regions. In most models of such optically thick emission regions, the gamma-ray flux is usually pair-absorbed exponentially, without considering the cascade evolving from the pair-produced electrons. From the observed flux it is often concluded that the emission emanates from larger distances, where the region is optically thin, especially from outside of the BLR. Only in a few blazars has gamma-ray attenuation associated with pair absorption in the BLR been clearly reported.
With the advent of sophisticated high-energy and very high energy gamma-ray detectors, like the Fermi Large Area Telescope or the Major Atmospheric Gamma-ray Imaging Cherenkov telescopes, spectral features have been found, besides the extraordinarily fast variability, that cannot be explained by conventional models reproducing the two-hump SED. Two such narrow spectral features are discussed in this work. For the nearby blazar Markarian 501, hints of a sharp peak around 3 TeV have been reported from a multi-wavelength campaign carried out in July 2014, while for 3C 279 a spectral dip was found in 2018 data that can hardly be described with conventional fitting functions. In this work it is examined whether these spectral peculiarities of blazar jet emission can be explained if the full radiation reprocessing through an IC pair cascade is accounted for.
Such a cascade is the multiple concatenation of IC scattering events and pair production events. In the cascades generally considered in this work, relativistic electrons and high-energy photons are injected into a fixed soft target photon field. A mathematical description of linear IC pair cascades with escape terms is developed on the basis of earlier works. The steady-state kinetic equations for the electrons and for the photons are determined, with attention paid to an explicit formulation and to motivating the correct integration limits of all integrals from kinematic constraints. In determining the potentially observable gamma-ray flux, both the attenuated injected flux and the flux evolving as an effect of IC up-scattering, pair absorption and escape are incorporated, giving the emerging spectra very distinct imprints.
Much effort is dedicated to the numerical solution of the electrons' kinetic equation via iterative schemes. It is explained why pointwise iteration from higher to lower Lorentz factors is more efficient than iterating the whole set of sampling points at once. The algorithm is parallelised in two places. First, several workers can perform pointwise iterations simultaneously. Second, the most demanding integral is split into a number of partial integrals that can be evaluated by multiple workers. Through these measures, the Python code can readily be applied to simulate steady-state IC pair cascades with escape.
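A minimal sketch of such a high-to-low pointwise sweep is given below. It only illustrates the iteration structure; the injection term, escape rate and the kernel feeding each bin from higher Lorentz factors are placeholders, not the kinetic-equation terms derived in this work, and the parallelisation is omitted.

```python
# Schematic pointwise high-to-low sweep for a steady-state electron spectrum
# (iteration structure only; all physical terms below are placeholders).
import numpy as np

def solve_cascade(gamma, Q_inj, escape_rate, kernel, n_sweeps=30, tol=1e-8):
    """gamma: descending Lorentz-factor grid; kernel(i, N): rate at which electrons
    in bins above gamma[i] (lower indices) feed bin i."""
    N = np.zeros_like(gamma)
    for _ in range(n_sweeps):
        N_old = N.copy()
        for i in range(len(gamma)):          # sweep downwards: bin i depends only on bins < i
            N[i] = (Q_inj[i] + kernel(i, N)) / escape_rate[i]
        if np.max(np.abs(N - N_old)) <= tol * (np.max(np.abs(N)) + 1e-300):
            break
    return N

# Toy usage: injection at the highest Lorentz factors, constant escape, and a
# placeholder kernel feeding each bin from the two bins directly above it.
gam = np.logspace(8, 2, 121)                 # descending grid
Q = np.where(gam > 1e6, 1.0, 0.0)
esc = np.full_like(gam, 0.1)
toy_kernel = lambda i, N: 0.05 * N[max(i - 2, 0):i].sum()
N_e = solve_cascade(gam, Q, esc, toy_kernel)
```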
In the case of Markarian 501 the developed framework is as follows. The AGN hosts an advection-dominated accretion flow with a normalised accretion rate of several \(10^{-4}\) and an electron temperature near \(10^{10}\) K. On the one hand, the accretion flow illuminates the few ambient gas clouds with approximate radius \(10^{11}\) m, which reprocess a fraction 0.01 of the luminosity into hydrogen and helium emission lines. On the other hand, the gamma rays from the accretion flow create electrons and positrons in a sporadically active vacuum gap in the BH magnetosphere. In the active gap, a power of roughly 0.001 of the Blandford-Znajek power is extracted from the rotating BH through a gap potential drop of several \(10^{18}\) V, generating ultra-relativistic electrons, which subsequently are multiplied by a factor of about \(10^6\) through interaction with the accretion flow photons. This electron beam propagates away from the central engine and encounters the photon field of one passing ionised cloud. The resulting IC pair cascade is simulated and the evolving gamma-ray spectrum is determined. Just above the absorption troughs due to the hydrogen lines, the spectrum exhibits a narrow bump around 3 TeV. When the cascaded emission is added to the emission generated at larger distances, the observed multi-wavelength SED including the sharp peak at 3 TeV is reproduced, underlining that radiation processes beyond conventional models are motivated by distinct spectral features.
The dip in the spectrum of 3C 279 is addressed by a similar cascade model. Three types of injection are considered, varying in the ratio of the photon density to the electron density and in the spectral shape. The IC pair cascade is assumed to happen either in the dense BLR photon field, with a luminosity of several \(10^{37}\) W and a radial size of a few \(10^{14}\) m, or in the diluted photon field outside of the BLR. The latter scenario is, however, rejected, as the spectral slope around several 100 MeV and the dip at a few 10 GeV cannot be reconciled within this model. The radiation cascaded in the BLR can explain the observational data, irrespective of the assumed injection rate. It is therefore concluded that, for this period of gamma-ray emission, the radiation production happens at the edge of the BLR of 3C 279.
Both investigations show that IC pair cascades can account for fine structure seen in blazar SEDs. It is insufficient to restrict the radiation transport to pure exponential absorption of an injection term. Pair production and IC up-scattering by all generations of photons and electrons in the optically thick regime critically shape the emerging spectra. As future improved detectors will provide more high-precision spectra, further observations of narrow spectral features can be expected. It therefore seems advisable to incorporate cascading into conventional radiation production models, or to extend the model developed in this work by synchrotron radiation.
Hard X-ray Properties of Relativistically Beamed Jets from Radio- and Gamma-Ray-Bright Blazars
(2022)
In this work I characterize the hard X-ray properties of blazars, active galactic nuclei with highly beamed emission, which are notoriously hard to detect in this energy range. I employ pre-defined samples of beamed AGN: the radio-selected MOJAVE and TANAMI samples, as well as the most recent gamma-ray-selected Fermi/LAT 4LAC catalog. The hard X-ray data are extracted from the 105-month all-sky survey maps of Swift/BAT (Burst Alert Telescope) in the energy band of 20 keV to 100 keV. A great majority of both the MOJAVE and TANAMI samples are significantly detected, with signal-to-noise ratios of the sources often just below the X-ray catalog signal thresholds. All blazar sub-types (FSRQs, BL Lacs) and radio galaxies show characteristic ranges of X-ray flux, luminosity, and photon index. Their properties are correlated with the shape and peak frequency of the corresponding SED. The LogN-LogS distributions of the samples show a scarcity of blazars in the middle and lower X-ray flux range, indicating differing evolutionary paths between radio and X-ray emission, which is also suggested by the corresponding luminosity functions. Compared to the radio samples, the 4LAC sources are on average significantly less bright in the BAT band, since this range often coincides with the spectral gap between the two big SED emission bumps. Also, the spectral shapes differ notably, especially for the sub-type of BL Lacs. Using the parameter space of X-ray and gamma-ray photon indices, 35 blazar candidate sources can be assigned to either the FSRQ or BL Lac type with high certainty. The reason why many blazars are weak in this energy band can be traced back to a number of factors: the selection bias of the initial sample, the differential evolution of the X-rays and the wavelengths in which the sample is defined, and the limited sensitivity of the observing instruments.
We prove a sharp Bernstein-type inequality for complex polynomials which are positive and satisfy a polynomial growth condition on the positive real axis. This leads to an improved upper estimate in the recent work of Culiuc and Treil (Int. Math. Res. Not. 2019: 3301–3312, 2019) on the weighted martingale Carleson embedding theorem with matrix weights. In the scalar case this new upper bound is optimal.
In this work, we consider impulsive dynamical systems evolving on an infinite-dimensional space and subjected to external perturbations. We look for stability conditions that guarantee the input-to-state stability of such systems. Our new dwell-time conditions allow the situation where both continuous and discrete dynamics can be unstable simultaneously. Lyapunov-like methods are developed for this purpose. Illustrative finite- and infinite-dimensional examples are provided to demonstrate the application of the main results. These examples cannot be treated by any other published approach and demonstrate the effectiveness of our results.
For the understanding of the variable, transient and non-thermal universe, unbiased long-term monitoring is crucial. To constrain the emission mechanisms at the highest energies, it is important to characterize the very high energy emission and its correlation with observations at other wavelengths. At very high energies, only a limited number of instruments is available. This article reviews the current status of monitoring of the extra-galactic sky at TeV energies.
In this viewpoint we do not change cosmology after the hot fireball starts (hence it agrees well with observation), but the suggested change of the start and its later implications lead to an even better fit with current observations (voids, supercluster and galaxy formation; matter and no antimatter) than the standard model with big bang and inflation: In an eternal ocean of qubits, a cluster of qubits crystallizes to defined bits. The universe does not jump into existence ("big bang"); rather, there is an eternal ocean of qubits in free superposition of all their quantum states (of any dimension, force field and particle type) as a permanent basis. The undefined, boiling vacuum is the real "outside", once you leave our everyday universe. A set of n qubits in the ocean is "liquid", in a very undefined state: each has all its m possible quantum states in free superposition. However, under certain conditions the qubits interact, become defined and freeze out; crystals form and give rise to a defined, real world with all possible time series and world lines. GR holds only within the crystal. In our universe all n**m quantum possibilities are nicely separated and crystallized out to defined bit states. A toy example with 6 qubits of 2 states each illustrates that this is already sufficient to encode space using 3 bits for x, y and z, 1 bit for the particle type and 2 bits for its state. Just by crystallization, space, particles and their properties emerge from the ocean of qubits, and, following the arrow of entropy, time emerges, with an arrow of time and expansion from one corner of the toy universe to everywhere else. This perspective provides time as an emergent feature of entropy: crystallization of each world line leads to world lines that are defined over their whole existence, while entropy ensures the direction of time and the higher representation of high-entropy states when considering the whole crystal and all slices of world lines. The crystal perspective is also economical compared to the Everett-type multiverse: each qubit has its m quantum states, and n qubits interacting to form a crystal, and hence turning into defined bit states, yield only n**m states and no more. There is no Everett-type world splitting with every decision; rather, individual world trajectories reside in individual world layers of the crystal. Finally, bit-separated crystals come and go in the qubit ocean, selecting for the ability to lay seeds for new crystals. This self-organizing reproduction also selects over generations for life-friendliness. The mathematical treatment introduces quantum action theory as a framework for a general lattice field theory extending quantum chromodynamics, where scalar fields for color interaction and gravity have to be derived from the permeating qubit-interaction field. Vacuum energy should become appropriately low through the binding properties of the qubit crystal. Connections to loop quantum gravity, string theory and emergent gravity are discussed. Standard physics (quantum computing; crystallization, solid-state physics) allows validation tests of this perspective and will extend current results.
Within the framework of a self-consistent outer-gap model of the pulsar magnetosphere, the very high energy electromagnetic emission of the Crab pulsar was simulated. This was done in parallel for two different cases, which differ in the assumed equations for the electric field strength and for the curvature radius of the magnetic field lines. The kinetics of the charged particles during their propagation through the outer gap was treated including curvature radiation, inverse Compton scattering and triplet pair production. The theoretically simulated spectrum is compared with data measured by Fermi-LAT and by the MAGIC telescopes.
Our universe may have started by Qubit decoherence:
In quantum computers, qubits have all their states undefined during calculation and become defined as output ("decoherence"). We study the transition from an uncontrolled, chaotic quantum vacuum ("before") to a clearly interacting "real world". In such a cosmology, the Big Bang singularity is replaced by a condensation event of interacting strings. This triggers a crystallization process. It avoids inflation, which does not fit current observations: increasing long-range interactions limit growth, and crystal symmetries ensure the same laws of nature and basic symmetries over the whole crystal. Tiny mis-arrangements provide the nuclei of superclusters and galaxies, and the crystal structure allows the arrangement of dark matter (halo regions) and normal matter (galaxy nuclei) for galaxy formation. Crystals come and go, and an evolutionary cosmology is explored: entropic forces from the quantum soup "outside" of the crystal try to dissolve it. This corresponds to dark energy and leads to a "big rip" in 70 gigayears. Selection for the best growth and condensation events over generations of crystals favors multiple self-organizing processes within the crystal, including life or even conscious observers in our universe. Philosophically, this theory shows harmony with nature and replaces absurd perspectives of current cosmology.
Independent of cosmology, we suggest that a "real world" (that is, our everyday macroscopic world) exists only inside a crystal. "Outside" there is wild quantum foam and superposition of all possibilities. In our crystallized world the vacuum no longer boils but is cooled down by the crystallization event; space-time exists and general relativity holds. Vacuum energy becomes 10**20 times smaller, exactly as observed in our everyday world. We live in a "solid" state within a crystal: the n quanta which build our world have all their different m states nicely separated. There are only n**m states available for this local "multiverse". The arrow of entropy for each edge of the crystal forms one fate, one world-line or clear development of our world, while layers of the crystal are different system states. Mathematical leads from loop quantum gravity (LQG) point to the required interactions and potentials. Interaction potentials for strings or loop quanta of any dimension allow a solid, decoherent state of quanta, though this is challenging to calculate. However, if we introduce the heuristic that any type of physical interaction of strings corresponds simply to a type of calculation, then the Hurwitz theorem of 1898 shows that only 1D, 2D, 4D and 8D (octonions) allow complex or hypercomplex number calculations. No other hypercomplex numbers, and hence no other dimensions or symmetries, allow calculations without yielding divisions by zero. However, the richest solution allowed by the Hurwitz theorem, octonions, is actually the observed symmetry of our universe, E8. Standard physics, such as condensation, crystallization and magnetization, but also solid-state physics and quantum computing, allow us to show an initial mathematical treatment of our new theory within LQG to describe the cosmological state transformations by equations and, most importantly, to point out routes to parametrization of the free parameters by looking at testable phenomena, experiments and formulas that describe processes of crystallization, protein folding, magnetization, solid-state physics and quantum computing. This is presented here for LQG; for string theory it would be more elegant, but that was too demanding to be shown here.
Note: While my previous OPUS server preprint "A new cosmology of a crystallization process (decoherence) from the surrounding quantum soup provides heuristics to unify general relativity and quantum physics by solid state physics" (https://doi.org/10.25972/OPUS-23076) deals with the same topics and basic formulas, this new version is improved: a clearer title, a better introduction, more stringent mathematics, and an improved discussion of the implications, including quantum computing, hints for parametrization and connections to LQG and other current cosmological efforts.
This version of 5 June 2021 is again an OPUS preprint; it will next be edited for the arXiv (https://arxiv.org).
Flux distribution is an important tool to understand the variability processes in active galactic nuclei. We now have available a great deal of observational evidence pointing towards the presence of log-normal components in the high energy light curves, and different models have been proposed to explain these data. Here, we collect some of the recent developments on this topic using the well-known blazar Mrk 501 as an example of the complex and interesting aspects coming from its flux distribution in different energy ranges and at different timescales. The observational data we refer to are those collected in a complementary manner by Fermi-LAT over multiple years, and by the First G-APD Cherenkov Telescope (FACT) and the H.E.S.S. array in correspondence with the bright flare of June 2014.
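As a minimal illustration of testing a flux distribution for a log-normal component, the following snippet fits log-normal and normal models to synthetic fluxes and compares them. The data are simulated and the simple KS comparison with fitted parameters is only indicative; it is not the analysis pipeline used in the studies discussed above.

```python
# Minimal check of a flux distribution for log-normality (synthetic data, indicative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
flux = rng.lognormal(mean=0.0, sigma=0.5, size=2000)     # stand-in "light curve" fluxes

shape, loc, scale = stats.lognorm.fit(flux, floc=0.0)    # fit a log-normal model
mu, sigma = stats.norm.fit(flux)                         # and a normal model for comparison
ks_logn = stats.kstest(flux, 'lognorm', args=(shape, loc, scale))
ks_norm = stats.kstest(flux, 'norm', args=(mu, sigma))
print(f"log-normal p = {ks_logn.pvalue:.3f}, normal p = {ks_norm.pvalue:.3f}")
```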
The main objectives of the KM3NeT Collaboration are (i) the discovery and subsequent observation of high-energy neutrino sources in the Universe and (ii) the determination of the mass hierarchy of neutrinos. These objectives are strongly motivated by two recent important discoveries, namely: (1) the high-energy astrophysical neutrino signal reported by IceCube and (2) the sizable contribution of electron neutrinos to the third neutrino mass eigenstate as reported by Daya Bay, Reno and others. To meet these objectives, the KM3NeT Collaboration plans to build a new Research Infrastructure consisting of a network of deep-sea neutrino telescopes in the Mediterranean Sea. A phased and distributed implementation is pursued which maximises the access to regional funds, the availability of human resources and the synergistic opportunities for the Earth and sea sciences community. Three suitable deep-sea sites are selected, namely off-shore Toulon (France), Capo Passero (Sicily, Italy) and Pylos (Peloponnese, Greece). The infrastructure will consist of three so-called building blocks. A building block comprises 115 strings, each string comprises 18 optical modules and each optical module comprises 31 photo-multiplier tubes. Each building block thus constitutes a three-dimensional array of photo sensors that can be used to detect the Cherenkov light produced by relativistic particles emerging from neutrino interactions. Two building blocks will be sparsely configured to fully explore the IceCube signal with similar instrumented volume, different methodology, improved resolution and
Interplanetary shocks are believed to play an important role in the acceleration of charged particles in the heliosphere. While the acceleration to high energies proceeds via the diffusive mechanism at scales exceeding the shock width by far, the initial stage (injection) should occur at the shock itself. Numerical tracing of ions is done in a model quasi-perpendicular shock front with typical interplanetary shock parameters (Mach number, upstream ion temperature). The analysis of the distribution of the transmitted solar wind is used to adjust the cross-shock potential, which is not directly measured. It is found that, for typical upstream ion temperatures, acceleration of ions from the tail of the solar wind distribution is unlikely. Pickup ions with a shell distribution are found to be effectively energized and may be injected into the subsequent diffusive acceleration regime. Pre-accelerated ions are efficiently up-scaled in energy. A part of these ions is returned to the upstream region, where they can be further accelerated diffusively.
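A common way to carry out such test-particle tracing is a Boris push through prescribed shock fields; the sketch below illustrates this with a hypothetical planar field profile (a step in the magnetic field plus a localised cross-shock electric field). The field shapes, normalisation and parameters are assumptions for illustration, not the shock model used in the paper.

```python
# Test-particle tracing with a Boris pusher through a prescribed planar shock field
# (illustration only; field profile and parameters are hypothetical).
import numpy as np

def boris_trace(x, v, E_func, B_func, qm, dt, n_steps):
    """Advance a particle in given E(x), B(x) fields; qm is the charge-to-mass ratio."""
    traj = [x.copy()]
    for _ in range(n_steps):
        E, B = E_func(x), B_func(x)
        v_minus = v + 0.5 * qm * dt * E            # half electric kick
        t = 0.5 * qm * dt * B                      # magnetic rotation vector
        s = 2.0 * t / (1.0 + t @ t)
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)    # full magnetic rotation
        v = v_plus + 0.5 * qm * dt * E             # second half electric kick
        x = x + v * dt
        traj.append(x.copy())
    return np.array(traj), v

# Quasi-perpendicular shock at x = 0: jump in Bz plus a narrow cross-shock Ex layer.
B_func = lambda x: np.array([0.0, 0.0, 1.0 if x[0] < 0.0 else 3.0])
E_func = lambda x: np.array([-0.5 * np.exp(-(x[0] / 0.1) ** 2), 0.0, 0.0])
traj, v_final = boris_trace(np.array([-2.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                            E_func, B_func, qm=1.0, dt=0.01, n_steps=2000)
```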
This work is concerned with the numerical approximation of solutions to models that are used to describe atmospheric or oceanographic flows. In particular, this work concentrates on the approximation of the Shallow Water equations with bottom topography and the compressible Euler equations with a gravitational potential. Numerous methods have been developed to approximate solutions of these models. Of specific interest here are the approximations of near-equilibrium solutions and, in the case of the Euler equations, the low Mach number flow regime. It is inherent in most numerical methods that the quality of the approximation increases with the number of degrees of freedom that are used. Therefore, these schemes are often run in parallel on big computers to achieve the best possible approximation. However, even on those big machines, the desired accuracy cannot be achieved with the given maximal number of degrees of freedom that these machines allow. The main focus of this work therefore lies in the development of numerical schemes that give a better resolution of the resulting dynamics on the same number of degrees of freedom, compared to classical schemes.
This work is the result of a cooperation between Prof. Klingenberg of the Institute of Mathematics in Würzburg and Prof. Röpke of the Astrophysical Institute in Würzburg. The aim of this collaboration is the development of methods to compute stellar atmospheres. Two main challenges are tackled in this work. The first is the accurate treatment of source terms in the numerical scheme. This leads to the so-called well-balanced schemes, which allow for an accurate approximation of near-equilibrium dynamics. The second challenge is the approximation of flows in the low Mach number regime. It is known that the compressible Euler equations tend towards the incompressible Euler equations when the Mach number tends to zero. Classical schemes often show excessive diffusion in that flow regime. The scheme developed here falls into the category of asymptotic-preserving schemes, i.e. the numerical scheme reflects the behavior of the continuous equations. Moreover, it is shown that the diffusion of the numerical scheme is independent of the Mach number.
In chapter 3, an HLL-type approximate Riemann solver is adapted for simulations of the Shallow Water equations with bottom topography to develop a well-balanced scheme. In the literature, most schemes only tackle the equilibria where the fluid is at rest, the so-called lake-at-rest solutions. Here, a scheme is developed to accurately capture all the equilibria of the Shallow Water equations. Moreover, in contrast to other works, a second-order extension is proposed that does not rely on an iterative scheme inside the reconstruction procedure, leading to a more efficient scheme.
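For reference, the sketch below shows the standard first-order hydrostatic-reconstruction approach (after Audusse et al.) that preserves the lake-at-rest equilibrium exactly. It is not the HLL-type scheme of chapter 3, which also balances moving equilibria; the Rusanov flux, the grid handling and the omitted boundary treatment are simplifying assumptions.

```python
# Minimal well-balanced first-order update for the 1D shallow-water equations
# via hydrostatic reconstruction (generic reference sketch, not the thesis's scheme).
import numpy as np

g = 9.81

def rusanov_flux(UL, UR):
    """Rusanov (local Lax-Friedrichs) flux for U = (h, hu)."""
    def phys(U):
        h, hu = U
        u = hu / h if h > 1e-12 else 0.0
        return np.array([hu, hu * u + 0.5 * g * h * h]), abs(u) + np.sqrt(g * max(h, 0.0))
    FL, sL = phys(UL)
    FR, sR = phys(UR)
    s = max(sL, sR)
    return 0.5 * (FL + FR) - 0.5 * s * (UR - UL)

def step(h, hu, b, dx, dt):
    """One forward-Euler step; b holds the bottom topography at the cell centres.
    Only interior interfaces are updated; boundary conditions are omitted."""
    h_new, hu_new = h.copy(), hu.copy()
    for i in range(len(h) - 1):
        # hydrostatic reconstruction of the interface water depths
        b_star = max(b[i], b[i + 1])
        hL = max(h[i] + b[i] - b_star, 0.0)
        hR = max(h[i + 1] + b[i + 1] - b_star, 0.0)
        uL = hu[i] / h[i] if h[i] > 1e-12 else 0.0
        uR = hu[i + 1] / h[i + 1] if h[i + 1] > 1e-12 else 0.0
        F = rusanov_flux(np.array([hL, hL * uL]), np.array([hR, hR * uR]))
        # topography source corrections that make the lake-at-rest state exact
        sL_corr = 0.5 * g * (h[i] ** 2 - hL ** 2)
        sR_corr = 0.5 * g * (h[i + 1] ** 2 - hR ** 2)
        h_new[i] -= dt / dx * F[0]
        hu_new[i] -= dt / dx * (F[1] + sL_corr)
        h_new[i + 1] += dt / dx * F[0]
        hu_new[i + 1] += dt / dx * (F[1] + sR_corr)
    return h_new, hu_new

# Lake-at-rest check: with h + b constant and u = 0, interior cells remain unchanged.
x = np.linspace(0.0, 1.0, 101)
b = 0.2 * np.exp(-((x - 0.5) / 0.1) ** 2)
h = 1.0 - b
h1, hu1 = step(h, np.zeros_like(h), b, dx=x[1] - x[0], dt=1e-3)
```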
In chapter 4, a Suliciu relaxation scheme is adapted for the resolution of hydrostatic equilibria of the Euler equations with a gravitational potential. The hydrostatic relations are underdetermined and therefore the solutions to these equations are not unique. However, the scheme is shown to be well-balanced for a wide class of hydrostatic equilibria. For specific classes, quadrature rules are computed to ensure the exact well-balanced property. Moreover, the scheme is shown to be robust, i.e. it preserves the positivity of mass and energy, and to be stable with respect to the entropy. Numerical results are presented in order to investigate the impact of the different quadrature rules on the well-balanced property.
In chapter 5, a Suliciu relaxation scheme is adapted for the simulation of low Mach number flows. The scheme is shown to be asymptotic preserving and not to suffer from excessive diffusion in the low Mach number regime. Moreover, it is shown to be robust under certain parameter combinations and to be stable with respect to a Chapman-Enskog analysis.
Numerical results are presented in order to show the advantages of the new approach.
In chapter 6, the schemes developed in chapters 4 and 5 are combined in order to investigate the performance of the numerical scheme in the low Mach number regime in a gravitationally stratified atmosphere. The scheme is shown to be well-balanced, robust and stable with respect to a Chapman-Enskog analysis. Numerical tests are presented to show the advantage of the newly proposed method over the classical scheme.
In chapter 7, some remarks on an alternative way to tackle multidimensional simulations are presented. However, no numerical simulations are performed, and it is shown why further research on the suggested approach is necessary.
The most energetic versions of active galactic nuclei (AGNs) feature two highly-relativistic plasma outflows, so-called jets, that are created in the vicinity of the central supermassive black hole and evolve in opposite directions. In blazars, which dominate the extragalactic gamma-ray sky, the jets are aligned close to the observer's line of sight leading to strong relativistic beaming effects of the jet emission. Radio observations especially using very long baseline interferometry (VLBI) provide the best way to gain direct information on the intrinsic properties of jets down to sub-parsec scales, close to their formation region.
In this thesis, I focus on the properties of three AGNs, IC 310, PKS 2004-447, and 3C 111, that belong to the small non-blazar population of gamma-ray-loud AGNs. In these kinds of AGNs, the jets are less strongly aligned with respect to the observer than in blazars. I study them in detail with a variety of radio astronomical instruments with respect to their high-energy emission and in the context of the large samples in the monitoring programmes MOJAVE and TANAMI. My analysis of radio interferometric observations and flux density monitoring data reveals very different characteristics of the jet emission in these sources. The work presented in this thesis illustrates the diversity of the radio properties of gamma-ray-loud AGNs that do not belong to the dominating class of blazars.