520 Astronomy and allied sciences
On-orbit verification of RL-based APC calibrations for micrometre level microwave ranging system
(2023)
Micrometre-level ranging accuracy between satellites on orbit relies on high-precision calibration of the antenna phase center (APC), which is currently accomplished through properly designed calibration maneuvers and batch estimation algorithms. In reality, however, unmodeled perturbations of the space dynamics and sensor-induced uncertainty complicate the situation; ranging accuracy deteriorates especially outside the antenna main lobe while maneuvers are performed. This paper proposes an on-orbit APC calibration method based on reinforcement learning (RL), aiming to provide a micrometre-level high-accuracy ranging datum for onboard instruments. The RL process used here is an improved temporal-difference advantage actor-critic algorithm (TDAAC), built around two neural networks (NN) for the critic and actor functions. The TDAAC output autonomously balances the amplitude of the APC calibration maneuvers against the APC observation sensitivity, with the objective of maximizing the APC estimation accuracy. The proposed RL-based APC calibration method was fully tested in software and in on-ground experiments, reaching an APC calibration accuracy below 2 mrad, and was validated with on-orbit maneuver data from 11–12 April 2022, achieving 1–1.5 mrad calibration accuracy after RL training. The proposed RL-based APC algorithm may be extended to proof-mass calibration scenarios, with actions fed back to the attitude determination and control system (ADCS), demonstrating its flexibility for future spacecraft payload applications.
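The core actor-critic loop can be sketched in a few lines. This is a toy illustration under stated assumptions — a linear critic, a softmax actor, and a hypothetical one-dimensional maneuver environment — not the paper's TDAAC implementation or its flight environment.

```python
import numpy as np

# Minimal temporal-difference advantage actor-critic sketch. The "environment"
# is a made-up 1-D stand-in: the state is the current APC estimate error
# (mrad), and the action chooses a small or a large calibration maneuver,
# trading APC observability against maneuver cost.
rng = np.random.default_rng(0)

def env_step(state, action):
    amplitude = 0.1 if action == 0 else 0.5
    # larger maneuvers observe the APC better (shrink the error) but cost more
    new_state = max(state - amplitude * rng.uniform(0.5, 1.0), 0.0)
    reward = -new_state - 0.2 * amplitude
    return new_state, reward

n_actions, gamma, alpha_c, alpha_a = 2, 0.95, 0.1, 0.05
w = np.zeros(2)                   # critic: linear value V(s) = w . phi(s)
theta = np.zeros((n_actions, 2))  # actor: softmax policy logits
phi = lambda s: np.array([1.0, s])

for episode in range(200):
    s = 2.0                        # start with 2 mrad APC error
    for t in range(20):
        logits = theta @ phi(s)
        p = np.exp(logits - logits.max()); p /= p.sum()
        a = rng.choice(n_actions, p=p)
        s2, r = env_step(s, a)
        # the TD error doubles as the advantage estimate
        delta = r + gamma * (w @ phi(s2)) - (w @ phi(s))
        w += alpha_c * delta * phi(s)            # critic update
        grad = -p[:, None] * phi(s)[None, :]     # grad of log softmax
        grad[a] += phi(s)
        theta += alpha_a * delta * grad          # actor update
        s = s2

print("value estimate at zero error:", w @ phi(0.0))
```

The design point mirrored here is that a single scalar TD error drives both networks, so no separate advantage network is needed.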
This work is concerned with the numerical approximation of solutions to models that are used to describe atmospheric or oceanographic flows. In particular, it concentrates on the approximation of the Shallow Water equations with bottom topography and the compressible Euler equations with a gravitational potential. Numerous methods have been developed to approximate solutions of these models. Of specific interest here are the approximations of near-equilibrium solutions and, in the case of the Euler equations, the low Mach number flow regime. It is inherent in most numerical methods that the quality of the approximation increases with the number of degrees of freedom. Therefore, these schemes are often run in parallel on large computers to achieve the best possible approximation. However, even on those machines, the desired accuracy often cannot be achieved with the maximal number of degrees of freedom they allow. The main focus of this work therefore lies in the development of numerical schemes that, for the same number of degrees of freedom, give a better resolution of the resulting dynamics than classical schemes.
This work is the result of a cooperation between Prof. Klingenberg of the Institute of Mathematics in Würzburg and Prof. Röpke of the Astrophysical Institute in Würzburg. The aim of this collaboration is the development of methods to compute stellar atmospheres. Two main challenges are tackled in this work. The first is the accurate treatment of source terms in the numerical scheme, which leads to so-called well-balanced schemes; they allow for an accurate approximation of near-equilibrium dynamics. The second challenge is the approximation of flows in the low Mach number regime. It is known that the compressible Euler equations tend towards the incompressible Euler equations as the Mach number tends to zero. Classical schemes often show excessive diffusion in that flow regime. The scheme developed here falls into the category of asymptotic-preserving schemes, i.e. the numerical scheme reflects the behavior of the continuous equations. Moreover, it is shown that the diffusion of the numerical scheme is independent of the Mach number.
In chapter 3, an HLL-type approximate Riemann solver is adapted for simulations of the Shallow Water equations with bottom topography to develop a well-balanced scheme. In the literature, most schemes only tackle the equilibria in which the fluid is at rest, the so-called lake-at-rest solutions. Here, a scheme is developed that accurately captures all equilibria of the Shallow Water equations. Moreover, in contrast to other works, a second-order extension is proposed that does not rely on an iterative scheme inside the reconstruction procedure, leading to a more efficient scheme.
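To make the well-balanced idea concrete, the following is a minimal sketch of a classical lake-at-rest well-balanced finite-volume scheme, using the hydrostatic reconstruction of Audusse et al. with a simple Rusanov flux and periodic boundaries. It preserves only the resting equilibrium h + b = const, u = 0 — the scheme of this thesis goes further and captures moving equilibria — and all numbers here are illustrative.

```python
import numpy as np

g = 9.81

def flux(h, hu):
    # physical flux of the 1-D Shallow Water equations
    u = hu / max(h, 1e-12)
    return np.array([hu, hu * u + 0.5 * g * h**2])

def rusanov(hl, hul, hr, hur):
    ul, ur = hul / max(hl, 1e-12), hur / max(hr, 1e-12)
    c = max(abs(ul) + np.sqrt(g * hl), abs(ur) + np.sqrt(g * hr))
    return 0.5 * (flux(hl, hul) + flux(hr, hur)) \
         - 0.5 * c * np.array([hr - hl, hur - hul])

N = 50
x = np.linspace(0.0, 1.0, N, endpoint=False)
b = 0.2 * np.exp(-100.0 * (x - 0.5)**2)   # smooth bottom bump
h = 1.0 - b                                # lake at rest: h + b = 1
hu = np.zeros(N)
dt, dx = 1e-3, x[1] - x[0]

for step in range(100):
    hn, hun = h.copy(), hu.copy()
    for i in range(N):                     # interface between cells i and i+1
        j = (i + 1) % N                    # periodic neighbour
        bmax = max(b[i], b[j])
        # hydrostatic reconstruction of the interface water heights
        hl = max(h[i] + b[i] - bmax, 0.0)
        hr = max(h[j] + b[j] - bmax, 0.0)
        ul, ur = hu[i] / max(h[i], 1e-12), hu[j] / max(h[j], 1e-12)
        f = rusanov(hl, hl * ul, hr, hr * ur)
        hn[i]  -= dt / dx * f[0]
        # source correction keeping flux and topography term in balance
        hun[i] -= dt / dx * (f[1] + 0.5 * g * (h[i]**2 - hl**2))
        hn[j]  += dt / dx * f[0]
        hun[j] += dt / dx * (f[1] + 0.5 * g * (h[j]**2 - hr**2))
    h, hu = hn, hun

print("max |h + b - 1| after 100 steps:", np.abs(h + b - 1.0).max())
```

Because the reconstructed interface states coincide for a resting lake, the numerical flux cancels the topography source exactly and the free surface stays flat to machine precision, which is the defining well-balanced property.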
In chapter 4, a Suliciu relaxation scheme is adapted for the resolution of hydrostatic equilibria of the Euler equations with a gravitational potential. The hydrostatic relations are underdetermined, and therefore their solutions are not unique. Nevertheless, the scheme is shown to be well-balanced for a wide class of hydrostatic equilibria. For specific classes, quadrature rules are derived to ensure the exact well-balanced property. Moreover, the scheme is shown to be robust, i.e. it preserves the positivity of mass and energy, and to be stable with respect to the entropy. Numerical results are presented to investigate the impact of the different quadrature rules on the well-balanced property.
In chapter 5, a Suliciu relaxation scheme is adapted for the simulation of low Mach number flows. The scheme is shown to be asymptotic preserving and free of excessive diffusion in the low Mach number regime. Moreover, it is shown to be robust under certain parameter combinations and stable with respect to a Chapman-Enskog analysis.
Numerical results are presented in order to show the advantages of the new approach.
In chapter 6, the schemes developed in chapters 4 and 5 are combined in order to investigate the performance of the numerical scheme in the low Mach number regime in a gravitationally stratified atmosphere. The scheme is shown to be well-balanced, robust, and stable with respect to a Chapman-Enskog analysis. Numerical tests are presented to show the advantage of the newly proposed method over the classical scheme.
In chapter 7, some remarks on an alternative way to tackle multidimensional simulations are presented. However, no numerical simulations are performed, and it is shown why further research on the suggested approach is necessary.
Blazars are among the most luminous sources in the universe. Their extremely short-time variability indicates emission processes powered by a supermassive black hole. With the current generation of Imaging Air Cherenkov Telescopes, these sources are explored at very high energies. As the threshold is lowered below 100 GeV and the sensitivity of the telescopes improves, more and more blazars are discovered in this energy regime. For the MAGIC telescope, a low-energy analysis has been developed that reaches energies of 50 GeV for the first time. The method is presented in this thesis using the example of PG 1553+113, for which a spectrum between 50 GeV and 900 GeV was measured. In the energy regime observed by MAGIC, strong attenuation of the gamma-rays is expected from pair production in interactions of gamma-rays with low-energy photons of the extragalactic background light. For PG 1553+113, this provides the possibility to constrain the redshift of the source, which is still unknown. Well studied from radio to X-ray energies, PG 1553+113 was discovered in 2005 in the very high energy regime. In total, it was observed with the MAGIC telescope for 80 hours between April 2005 and April 2007. From more than three years of data taking, the MAGIC telescope provides huge amounts of data and a large number of files from various sources. To handle this data volume and to monitor the data quality, an automatic procedure is essential. Therefore, a concept for automatic data processing and management has been developed. Thanks to its flexibility, the concept is easily applicable to future projects. The implementation of an automatic analysis has been running stably for three years in the data center in Würzburg and provides consistent results for all MAGIC data, i.e. equal processing ensures comparability. In addition, this database-controlled system allows for easy tests of new analysis methods and for re-processing all data with a new software version at the push of a button.
At any stage, not only are the availability of the data and its processing status known, but also a large set of quality parameters and results can be queried from the database, facilitating quality checks, data selection, and continuous monitoring of the telescope performance. By using the automatic analysis, the whole data sample can be analyzed in a reasonable amount of time, and the analyzers can concentrate on interpreting the results instead. For PG 1553+113, the tools and results of the automatic analysis were used. Compared to the previously published results, the software includes improvements such as an absolute pointing correction, an absolute light calibration, and improved quality and background-suppression cuts. In addition, newly developed analysis methods taking timing information into account were used. Based on the automatically produced results, the presented analysis was enhanced using a special low-energy analysis. Part of the data was affected by absorption due to the Saharan Air Layer, i.e. sand dust in the atmosphere. Therefore, a new method has been developed to correct for the effect of this meteorological phenomenon. Applying the method, the affected data could be corrected for apparent flux variations and for the effects of absorption on the spectrum, allowing the result to be used for further studies. This is especially interesting, as these data were taken during a multi-wavelength campaign. For the whole data sample of 54 hours after quality checks, a signal from the position of PG 1553+113 was found with a significance of 15 standard deviations. Fitting a power law to the combined spectrum between 75 GeV and 900 GeV yields a spectral slope of 4.1 +/- 0.2. Thanks to the low-energy analysis, the spectrum could be extended to below 50 GeV. Fitting down to 48 GeV, the flux remains the same, but the slope changes to 3.7 +/- 0.1. The determined daily light curve shows that the integral flux above 150 GeV is consistent with a constant flux.
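A power-law fit of the kind quoted above is a straight-line fit in log-log space. The sketch below uses synthetic flux points as stand-ins for the MAGIC measurements; the normalisation, pivot energy, and noise level are illustrative assumptions.

```python
import numpy as np

# Power-law spectral fit: dN/dE = f0 * (E/E0)**(-Gamma) becomes linear in
# log-log space, so ordinary least squares recovers the photon index Gamma.
rng = np.random.default_rng(1)

E = np.geomspace(75, 900, 8)              # GeV, as in the quoted fit range
gamma_true, f0, E0 = 4.1, 1e-10, 200.0    # illustrative spectrum
flux_pts = f0 * (E / E0)**(-gamma_true) * rng.normal(1.0, 0.05, E.size)

# linear fit: log(flux) = log(f0) - Gamma * log(E/E0)
slope, intercept = np.polyfit(np.log(E / E0), np.log(flux_pts), 1)
print(f"fitted photon index Gamma = {-slope:.2f}")
```

A real spectral fit would weight the points by their measurement errors and propagate them into the slope uncertainty; the unweighted fit shown here conveys only the basic geometry of the estimate.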
No significant variability was found in the spectral shape either during the three years of observations. In July 2006, a multi-wavelength campaign was performed. Simultaneous data from the X-ray satellite Suzaku, the optical telescope KVA and the two Cherenkov experiments MAGIC and H.E.S.S. are available. Suzaku measured for the first time a spectrum up to 30 keV. The source was found to be at an intermediate flux level compared to previous X-ray measurements, and no short-term variability was found in the continuous data sample of 41.1 ksec. In the gamma-ray regime, no variability was found during the campaign either. Assuming a maximum slope of 1.5 for the intrinsic spectrum, an upper limit of z < 0.74 was determined by de-absorbing the measured spectrum for the attenuation of photons by the extragalactic background light. For further studies, a redshift of z = 0.3 was assumed. Collecting various data from radio, infrared, optical, ultraviolet, X-ray and gamma-ray energies, a spectral energy distribution was determined, including the simultaneous data of the multi-wavelength campaign. Fitting the simultaneous data with different synchrotron self-Compton models shows that the observed spectral shape can be explained by synchrotron self-Compton processes. The best result was obtained with a model assuming a log-parabolic electron distribution.
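The redshift-constraint argument can be illustrated with a toy calculation: de-absorb the observed spectrum with an EBL optical depth tau(E, z) and increase z until the intrinsic slope becomes harder than the assumed physical limit of 1.5. The tau model below is a crude linear-in-z, power-law-in-E stand-in, not a real EBL model, and the resulting number is purely illustrative.

```python
import numpy as np

def tau(E_TeV, z):
    # hypothetical EBL optical depth; real models come from galaxy-evolution
    # calculations and have a much richer energy and redshift dependence
    return 8.0 * z * E_TeV**0.8

E = np.geomspace(0.075, 0.9, 8)          # TeV, mirroring the fit range above
obs = E**(-4.1)                          # observed power law, slope 4.1

def intrinsic_slope(z):
    intr = obs * np.exp(tau(E, z))       # de-absorb the observed spectrum
    slope, _ = np.polyfit(np.log(E), np.log(intr), 1)
    return -slope                        # photon index of de-absorbed spectrum

# scan z; the limit is where the intrinsic spectrum would get harder than 1.5
zs = np.linspace(0.0, 1.5, 301)
z_max = zs[np.array([intrinsic_slope(z) for z in zs]) > 1.5].max()
print(f"toy upper limit: z < {z_max:.2f}")
```

Since de-absorption hardens the spectrum more strongly at high energies, demanding a still-physical intrinsic index turns the EBL attenuation into a distance estimator, which is the logic behind the z < 0.74 limit quoted above.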
At the beginning of regular observations with the MAGIC telescope in December 2004, all but one of the extragalactic sources detected in very high energy (VHE) gamma-rays belonged to the class of high-frequency peaked BL Lac (HBL) objects. This motivated a systematic scan of candidate sources to increase the number of known sources and to study their spectral properties systematically. As candidate sources for VHE emission, X-ray bright HBLs were selected from a compilation of active galactic nuclei. The MAGIC observations took place from December 2004 to March 2006. The declination of the objects was restricted to values between -1.2° and +58.8°, corresponding to a maximum zenith distance of less than 30° at culmination. Since gamma-rays are absorbed through pair production on low-energy background radiation fields, the redshift of the investigated objects was limited to z < 0.3. Under the assumption that HBLs generally emit the same energy flux at 1 keV as at 200 GeV, only the brightest X-ray sources were observed, leading to a cut in the X-ray flux of F(1 keV) > 2 µJy. Of the fourteen sources observed, four have been detected: 1ES 1218+304 (for the first time at very high energies), 1ES 2344+514 (strong detection in a state of low activity), Mrk 421 and Mrk 501. A hint of a signal at the 3-sigma level from the direction of 1ES 1011+496 has been observed. In the meantime, the object has been confirmed as a source of VHE gamma-rays by a second MAGIC observation campaign triggered by an optical outburst. For ten sources, upper limits on their integral fluxes above 200 GeV have been calculated at the 99% confidence level. To cross-calibrate the different data samples, collected during 14 months, bright muon ring images have been used, recorded as background events by the MAGIC telescope.
Based on the development by Meyer (2003), the method has been improved and implemented into the automatic data analysis as a continuous monitor of the calibration and of the point spread function of the optical system. While the ring images are generated by muons with small impact parameters, it could be shown that the image parameter distributions for muons with large impact parameters and for gamma showers completely overlap, revealing these muons as the dominant background for gamma-ray observations below energies of 150 GeV. The sample of HBLs (including all HBLs detected at VHE so far) has been investigated for correlations between broad-band spectral indices as determined from simultaneous optical, archival X-ray and radio luminosities, finding that the VHE-emitting HBLs do not differ from the non-detected ones. In general, the absorption-corrected HBL gamma-ray luminosities at 200 GeV are not higher than their X-ray luminosities at 1 keV. Based on a complete X-ray BL Lac sample, the Hamburg/ROSAT X-ray BL Lac sample, the number of expected VHE sources has been estimated for the performed scan, finding a consistent number under the assumption of a 37% completeness of the investigated sample and a 1 keV-to-200 GeV luminosity ratio of 1.4. An upper limit on the omnidirectional flux at 200 GeV has been calculated by interpolating the sum over the observed fluxes and upper limits. Within the uncertainties, the result is in agreement with the expectations derived from the X-ray luminosity function of BL Lacs. For 1ES 1218+304 and 1ES 2344+514, the light curves have been derived, showing evidence for flux variability on time scales of 17 days and 24 h, respectively. In the case of 1ES 1218+304, variability has been reported for the first time at VHEs. For both sources, the energy spectra have been reconstructed and discussed in the context of their broad-band spectral energy distribution (SED), using a single-zone synchrotron self-Compton model.
The SEDs are well fitted by the simulation, even though the very high peak frequencies in gamma-rays push the model to its limits. The parameters derived from the simulation are in good agreement with those found for similar HBLs.
Multi-Wavelength Observations of the high-peaked BL Lacertae objects 1ES 1011+496 and 1ES 2344+514
(2012)
BL Lacertae objects belong to the most luminous sources in the Universe. They represent a subclass of active galactic nuclei with a spectrum that is dominated by non-thermal emission, extending from radio wavelengths to tera-electronvolt (TeV) energies. The emission is strongly variable on time scales from years down to minutes, and arises from relativistic jets pointing at small angles to the line of sight of the observer, which is the reason for naming them “blazars”. Blazars are the dominant extragalactic source class in the radio, microwave and gamma-ray regimes, are prime candidates for the origin of the cosmic rays, and are excellent laboratories to study black hole and jet physics as well as relativistic effects. Despite more than 20 years of observational efforts, the physical mechanisms driving their emission are not yet fully understood. So far, studies of their broad-band continuum emission have mostly concentrated on bright, flaring states. However, for a better understanding of the central engine powering the jets, the bias from the flux-limited observations of the past must be overcome, and their long-term average continuum spectral energy distributions (SEDs) must be determined. This work presents the first simultaneous multi-wavelength campaigns from the radio to the TeV regime for two high-frequency peaked BL Lacertae objects known to emit at TeV energies. The first source, 1ES 1011+496, was observed between February and May 2008, the second one, 1ES 2344+514, between September 2008 and February 2009. The extensive observational campaigns were organised independently of any external trigger on the presence of a flaring state. Since the duty cycle of major flux outbursts is known to be rather low, the campaigns were expected to yield SEDs representative of the long-term average emission. Central to this thesis is the analysis of data obtained with the MAGIC Cherenkov telescope, measuring energy spectra and light curves from ~0.1 to ~10 TeV.
For the remaining instruments, the author of this work proposed observation time and organised additional data in collaboration with the instrument teams. Such data was obtained mostly in a fully reduced state. Individual light curves are investigated as well as combined in a search for inter-band correlations. The data of both sources reveal a notable lack of correlation between the emission at radio and optical wavelengths, indicating that the radio and short-wavelength emission arise in different regions of the jet. Quasi-simultaneous SEDs of two different flux states are determined observationally and described by a one-zone as well as a self-consistent two-zone synchrotron self-Compton model. First approaches to model the SEDs by means of a chi-squared minimisation technique are briefly discussed. The SEDs and the resulting model parameters, characterising the physical conditions in the emission regions, are compared to archival data. Though the models can describe the data well, for 1ES 1011+496 the model parameters indicate that, in addition to the synchrotron and inverse-Compton emission of relativistic electrons, emission from accelerated protons seems to be required. The SEDs of 1ES 2344+514 reveal one of the lowest activity states ever detected from the source. Despite that, the model parameters are not indicative of a distinct quiescent state, which may be caused by the degeneracy of the different parameters in one-zone models. Moreover, indications accumulate that the radiation cannot be attributed to a single emission region. The results disfavour some of the current blazar classification schemes and the so-called “blazar sequence”, emphasising the need for a more realistic explanation of the systematics of the blazar SEDs in terms of fundamental parameters.
Lidar pose tracking of a tumbling spacecraft using the smoothed normal distribution transform
(2023)
Lidar sensors enable precise pose estimation of an uncooperative spacecraft in close range. In this context, the iterative closest point (ICP) algorithm is usually employed as a tracking method. However, when the size of the point clouds increases, the required computation time of the ICP can become a limiting factor. The normal distribution transform (NDT) is an alternative algorithm which can be more efficient than the ICP, but suffers from robustness issues. In addition, lidar sensors are also subject to motion blur effects when tracking a spacecraft tumbling with a high angular velocity, leading to a loss of precision in the relative pose estimation. This work introduces a smoothed formulation of the NDT to improve the algorithm’s robustness while maintaining its efficiency. Additionally, two strategies are investigated to mitigate the effects of motion blur. The first consists of undistorting the point cloud, while the second is a continuous-time formulation of the NDT. Hardware-in-the-loop tests at the European Proximity Operations Simulator demonstrate the capability of the proposed methods to precisely track an uncooperative spacecraft under realistic conditions within tens of milliseconds, even when the spacecraft tumbles with a significant angular rate.
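The NDT building block can be sketched compactly: the reference cloud is binned into voxels, each voxel is summarised by a Gaussian, and a query cloud is scored by the likelihood of its points under those Gaussians. Pose optimisation and the smoothed formulation of this work are omitted; the voxel size, covariance floor, and test clouds below are illustrative choices.

```python
import numpy as np

def build_ndt(ref_points, voxel=0.5, cov_floor=1e-3):
    # bin reference points into voxels and fit a Gaussian per voxel
    cells = {}
    keys = np.floor(ref_points / voxel).astype(int)
    for k, p in zip(map(tuple, keys), ref_points):
        cells.setdefault(k, []).append(p)
    grid = {}
    for k, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 3:
            continue                      # too few points for a covariance
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) + cov_floor * np.eye(3)  # regularise thin voxels
        grid[k] = (mu, np.linalg.inv(cov))
    return grid

def ndt_score(grid, query_points, voxel=0.5):
    # likelihood-style score of a query cloud against the voxel Gaussians
    score = 0.0
    for p in query_points:
        k = tuple(np.floor(p / voxel).astype(int))
        if k in grid:
            mu, cov_inv = grid[k]
            d = p - mu
            score += np.exp(-0.5 * d @ cov_inv @ d)
        # points falling in empty voxels contribute nothing -- one of the
        # robustness weaknesses a smoothed NDT formulation addresses
    return score

rng = np.random.default_rng(2)
ref = rng.normal(0.0, 1.0, (2000, 3))
grid = build_ndt(ref)
aligned = ref[:200] + rng.normal(0.0, 0.02, (200, 3))
shifted = aligned + np.array([3.0, 0.0, 0.0])
print("aligned score:", ndt_score(grid, aligned))
print("shifted score:", ndt_score(grid, shifted))  # should be markedly lower
```

A pose tracker would wrap `ndt_score` in an optimiser over the six pose parameters; the hard score cutoff at voxel boundaries is exactly the discontinuity that motivates smoothing the formulation.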
The main objectives of the KM3NeT Collaboration are (i) the discovery and subsequent observation of high-energy neutrino sources in the Universe and (ii) the determination of the mass hierarchy of neutrinos. These objectives are strongly motivated by two recent important discoveries, namely (1) the high-energy astrophysical neutrino signal reported by IceCube and (2) the sizable contribution of electron neutrinos to the third neutrino mass eigenstate as reported by Daya Bay, RENO and others. To meet these objectives, the KM3NeT Collaboration plans to build a new Research Infrastructure consisting of a network of deep-sea neutrino telescopes in the Mediterranean Sea. A phased and distributed implementation is pursued which maximises the access to regional funds, the availability of human resources and the synergistic opportunities for the Earth and sea sciences community. Three suitable deep-sea sites have been selected, namely off-shore Toulon (France), Capo Passero (Sicily, Italy) and Pylos (Peloponnese, Greece). The infrastructure will consist of three so-called building blocks. A building block comprises 115 strings, each string comprises 18 optical modules and each optical module comprises 31 photomultiplier tubes. Each building block thus constitutes a three-dimensional array of photosensors that can be used to detect the Cherenkov light produced by relativistic particles emerging from neutrino interactions. Two building blocks will be sparsely configured to fully explore the IceCube signal with similar instrumented volume, different methodology, improved resolution and
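As a quick sanity check on the detector numbers quoted above, the photosensor count per building block follows directly from the string, module, and tube counts:

```python
# Photosensor count per KM3NeT building block, from the figures in the text.
strings_per_block = 115
modules_per_string = 18
pmts_per_module = 31

pmts_per_block = strings_per_block * modules_per_string * pmts_per_module
print(pmts_per_block)  # 64170 photomultiplier tubes per building block
```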
The Event Horizon Telescope (EHT) has led to the first images of a supermassive black hole, revealing the central compact objects in the elliptical galaxy M87 and the Milky Way. Proposed upgrades to this array through the next-generation EHT (ngEHT) program would sharply improve the angular resolution, dynamic range, and temporal coverage of the existing EHT observations. These improvements will uniquely enable a wealth of transformative new discoveries related to black hole science, extending from event-horizon-scale studies of strong gravity to studies of explosive transients to the cosmological growth and influence of supermassive black holes. Here, we present the key science goals for the ngEHT and their associated instrument requirements, both of which have been formulated through a multi-year international effort involving hundreds of scientists worldwide.
Interplanetary shocks are believed to play an important role in the acceleration of charged particles in the heliosphere. While the acceleration to high energies proceeds via the diffusive mechanism at scales far exceeding the shock width, the initial stage (injection) should occur at the shock itself. Numerical tracing of ions is performed in a model quasi-perpendicular shock front with typical interplanetary shock parameters (Mach number, upstream ion temperature). The analysis of the distribution of the transmitted solar wind is used to adjust the cross-shock potential, which is not directly measured. It is found that, for typical upstream ion temperatures, acceleration of ions from the tail of the solar wind distribution is unlikely. Pickup ions with a shell distribution, however, are found to be effectively energized and may be injected into the subsequent diffusive acceleration regime. Pre-accelerated ions are efficiently boosted to higher energies. A part of these ions is returned to the upstream region, where they can be further accelerated diffusively.
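Test-particle tracing of this kind can be sketched with a standard Boris pusher in prescribed fields: a magnetic-field jump across a thin ramp, the uniform motional electric field of the upstream flow, and a decelerating cross-shock electric field localised in the ramp. All field values, the ramp width, and the normalised units below are illustrative assumptions, not the paper's shock model.

```python
import numpy as np

q_m = 1.0                      # charge-to-mass ratio (normalised units)
ramp = 0.1                     # shock ramp half-width
B_up = np.array([0.0, 0.0, 1.0])
B_down = np.array([0.0, 0.0, 3.0])   # compressed downstream field
E_y = 1.0                      # motional field for an upstream flow u1 = 1
E_cs = 0.5                     # peak cross-shock field (decelerates ions)

def fields(x):
    s = 0.5 * (1.0 + np.tanh(x / ramp))        # 0 upstream -> 1 downstream
    B = B_up + s * (B_down - B_up)
    E = np.array([-E_cs / np.cosh(x / ramp)**2, E_y, 0.0])
    return E, B

def boris_step(r, v, dt):
    # standard Boris rotation: half electric kick, magnetic rotation, half kick
    E, B = fields(r[0])
    v_minus = v + 0.5 * q_m * E * dt
    t = 0.5 * q_m * B * dt
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_m * E * dt
    return r + v_new * dt, v_new

# a solar-wind ion starting upstream, moving with the E x B drift of the flow
r = np.array([-2.0, 0.0, 0.0])
v = np.array([1.0, 0.0, 0.0])
dt = 0.01
for _ in range(2000):
    r, v = boris_step(r, v, dt)

print("final x position:", r[0])   # transmitted downstream for these values
print("final speed:", np.linalg.norm(v))
```

With these parameters the ion's upstream kinetic energy exceeds the cross-shock potential drop, so it is transmitted and gyrates about the slower downstream drift; sampling many initial velocities from a thermal or shell distribution is what turns this single trajectory into the transmitted-distribution analysis described above.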
Indirect Search for Dark Matter in the Universe - the Multiwavelength and Multiobject Approach
(2011)
Cold dark matter constitutes a basic tenet of modern cosmology, essential for our understanding of structure formation in the Universe. Since its first discovery by means of spectroscopic observations of the dynamics of the Coma cluster some 80 years ago, mounting evidence of its gravitational pull and its impact on the geometry of space-time has built up across a wide range of scales, from galaxies to the entire Hubble flow. The apparent lack of electromagnetic coupling and independent measurements of the energy density of baryonic matter from the primordial abundances of light elements show the non-baryonic nature of dark matter, and its clustering properties prove that it is cold, i.e. that its temperature was lower than its mass at the time of radiation-matter equality. Generic particle candidates for cold dark matter are weakly interacting massive particles at the electroweak symmetry-breaking scale, such as the neutralinos of R-parity conserving supersymmetry. Such particles would naturally freeze out with a cosmologically relevant relic density at early times in the expanding Universe. Subsequent clustering of matter would restore annihilation interactions between the dark matter particles to some extent and thus lead to potentially observable high-energy emission from the decaying unstable secondaries produced in annihilation events. The spectra of the secondaries would permit a determination of the mass and annihilation cross section, which are crucial for the microphysical identification of the dark matter. This is the central motivation for indirect dark matter searches. However, at present neither the indirect searches, nor the complementary direct searches based on the detection of elastic scattering events, nor the production of candidate particles in collider experiments have provided unequivocal evidence for dark matter.
This does not come as a surprise, since the dark matter particles interact only through weak interactions, and the corresponding secondary emission must therefore be extremely faint. It turns out that even for the strongest mass concentrations in the Universe, the dark matter annihilation signal is not expected to exceed the level of competing astrophysical sources. Thus, discriminating the putative dark matter annihilation signal from the signals of the astrophysical inventory has become crucial for indirect search strategies. In this thesis, a novel search strategy is developed and exemplified in which target selection across a wide range of masses, astrophysical background estimation, and multiwavelength signatures play the key role. It turns out that the uncertainties regarding the halo profile and the boost due to surviving substructure are larger for halos at the lower end of the observed mass scales, i.e. in the regime of dwarf galaxies and below, while astrophysical backgrounds tend to become more severe for massive dark matter halos such as clusters of galaxies. By contrast, the uncertainties due to unknown details of particle physics are invariant under changes of the halo mass. Therefore, the different scaling behaviors can be employed to significantly cut down the uncertainties in observations of different targets covering a major part of the involved mass scales. This strategic approach was implemented in the scientific program carried out with the MAGIC telescope system. Observations of dwarf galaxies and of the Virgo and Perseus clusters of galaxies have been carried out and, at the time of writing, result in some of the most stringent constraints on weakly interacting massive particles from indirect searches.
Here, the low-threshold design of the MAGIC telescope system plays a crucial role, since the bulk of the high-energy photons, produced with a high multiplicity during the fragmentation of unstable dark matter annihilation products, are emitted at energies well below the dark matter mass scale. The upper limits severely constrain less generic, but more prolific scenarios characterized by extraordinarily high annihilation efficiencies.