520 Astronomy and allied sciences
The main objectives of the KM3NeT Collaboration are (i) the discovery and subsequent observation of high-energy neutrino sources in the Universe and (ii) the determination of the mass hierarchy of neutrinos. These objectives are strongly motivated by two recent important discoveries, namely: (1) the high-energy astrophysical neutrino signal reported by IceCube and (2) the sizable contribution of electron neutrinos to the third neutrino mass eigenstate as reported by Daya Bay, RENO and others. To meet these objectives, the KM3NeT Collaboration plans to build a new Research Infrastructure consisting of a network of deep-sea neutrino telescopes in the Mediterranean Sea. A phased and distributed implementation is pursued which maximises the access to regional funds, the availability of human resources and the synergistic opportunities for the Earth and sea sciences community. Three suitable deep-sea sites are selected, namely off-shore Toulon (France), Capo Passero (Sicily, Italy) and Pylos (Peloponnese, Greece). The infrastructure will consist of three so-called building blocks. A building block comprises 115 strings, each string comprises 18 optical modules and each optical module comprises 31 photo-multiplier tubes. Each building block thus constitutes a three-dimensional array of photo sensors that can be used to detect the Cherenkov light produced by relativistic particles emerging from neutrino interactions. Two building blocks will be sparsely configured to fully explore the IceCube signal with similar instrumented volume, different methodology, improved resolution and complementary field of view, including the Galactic plane. One building block will be densely configured to precisely measure atmospheric neutrino oscillations.
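As a quick arithmetic cross-check of the detector hierarchy quoted above, a minimal Python sketch (illustrative only; it uses nothing beyond the numbers given in the abstract):

# Detector hierarchy as quoted in the abstract (illustrative arithmetic only).
BUILDING_BLOCKS = 3        # planned building blocks
STRINGS_PER_BLOCK = 115    # strings per building block
MODULES_PER_STRING = 18    # optical modules per string
PMTS_PER_MODULE = 31       # photo-multiplier tubes per optical module

pmts_per_block = STRINGS_PER_BLOCK * MODULES_PER_STRING * PMTS_PER_MODULE
print("PMTs per building block:", pmts_per_block)                    # 64170
print("PMTs in the full array: ", pmts_per_block * BUILDING_BLOCKS)  # 192510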
Twenty years after the discovery of the Crab Nebula as a source of very high energy gamma-rays, the number of sources newly discovered above 100 GeV using ground-based Cherenkov telescopes has grown considerably, to a total of 81 at the time of writing of this thesis. The sources are of different types, including galactic sources such as supernova remnants, pulsars, binary systems, or so-far unidentified accelerators, and extragalactic sources such as blazars and radio galaxies. The goal of this thesis work was to use the MAGIC telescope to search for gamma-ray emission from a particular type of blazar previously undetected at very high gamma-ray energies. The blazars detected before were all of the same type, the so-called high-peaked BL Lacertae objects. These sources emit purely non-thermal radiation and exhibit a peak in their radio-to-X-ray spectral energy distribution at X-ray energies. The entire blazar population extends from these rare, low-luminosity BL Lacertae objects with peaks at X-ray energies to the much more numerous, high-luminosity infrared-peaked radio quasars. Indeed, the low-peaked sources dominate the source counts obtained from space-borne observations at gamma-ray energies up to 10 GeV. Their spectra observed at lower gamma-ray energies show power-law extensions to higher energies, although theoretical models suggest that they turn over at energies below 100 GeV. This opened a quest for MAGIC, currently the Cherenkov telescope with the lowest energy threshold. In the framework of this thesis, the search was focused on the prominent sources BL Lac, W Comae and S5 0716+714. Two of the sources were unambiguously discovered at very high energy gamma-rays with the MAGIC telescope, based on the analysis of a total of about 150 hours of data collected between 2005 and 2008. The analysis of this very large data set required novel techniques for treating the effects of twilight conditions on the data quality. This was successfully achieved and resulted in a vastly improved performance of the MAGIC telescope in monitoring campaigns. The detections of low-peaked and intermediate-peaked BL Lac objects are in line with theoretical expectations, but push the models based on electron shock acceleration and inverse-Compton cooling to their limits. The short variability time scales of the order of one day observed at very high energies show that the gamma-rays originate rather close to the putative supermassive black holes in the centers of blazars, corresponding to less than 1000 Schwarzschild radii when taking into account relativistic bulk motion.
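The causality estimate behind the last statement can be written out explicitly; a hedged sketch with illustrative numbers (the Doppler factor and black-hole mass below are generic assumptions, not values taken from the thesis):

\[
  R \;\lesssim\; \frac{c\,\Delta t\,\delta}{1+z},
  \qquad
  R_S \;=\; \frac{2\,G M_\bullet}{c^{2}} .
\]

For a variability time scale \(\Delta t \approx 1\) day, a bulk Doppler factor \(\delta \approx 10\) and a black-hole mass \(M_\bullet \approx 10^{8}\,M_\odot\), one obtains \(R \lesssim 2.6\times10^{16}\) cm \(\approx 900\,R_S\), i.e. below the 1000 Schwarzschild radii quoted above.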
In the past few years, the Event Horizon Telescope (EHT) has provided the first-ever event-horizon-scale images of the supermassive black holes (BHs) M87* and Sagittarius A* (Sgr A*). The next-generation EHT (ngEHT) project is an extension of the EHT array that promises higher angular resolution and higher sensitivity to the dim, extended flux around the central ring-like structure, possibly connecting the accretion flow and the jet. The ngEHT Analysis Challenges aim to understand how well science can be extracted from synthetic images and movies, in order to inform the ngEHT array design and the development of analysis algorithms. In this work, we compare the accretion flow structure and dynamics in numerical fluid simulations that specifically target M87* and Sgr A* and were used to construct the source models in the challenge set. We consider (1) a steady-state axisymmetric radiatively inefficient accretion flow model with a time-dependent shearing hotspot, (2) two time-dependent single-fluid general relativistic magnetohydrodynamic (GRMHD) simulations from the H-AMR code, (3) a two-temperature GRMHD simulation from the BHAC code, and (4) a two-temperature radiative GRMHD simulation from the KORAL code. We find that the different models exhibit remarkably similar temporal and spatial properties, except for the electron temperature, since radiative losses substantially cool down electrons near the BH and the jet sheath, signaling the importance of radiative cooling even for slowly accreting BHs such as M87*. We restrict ourselves to standard torus accretion flows, and leave larger explorations of alternative accretion models to future work.
Our universe may have started through qubit decoherence:
In quantum computers, qubits have all their states undefined during the calculation and become defined as output (“decoherence”). We study the transition from an uncontrolled, chaotic quantum vacuum (“before”) to a clearly interacting “real world”. In such a cosmology, the Big Bang singularity is replaced by a condensation event of interacting strings, which triggers a crystallization process. This avoids inflation, which does not fit current observations: increasing long-range interactions limit growth, and crystal symmetries ensure the same laws of nature and basic symmetries over the whole crystal. Tiny mis-arrangements provide the nuclei of superclusters and galaxies, and the crystal structure allows the arrangement of dark matter (halo regions) and normal matter (galaxy nuclei) for galaxy formation. Crystals come and go, and an evolutionary cosmology is explored: entropic forces from the quantum soup “outside” the crystal try to dissolve it. This corresponds to dark energy and leads to a “big rip” in 70 gigayears. Selection for the best growth and condensation events over generations of crystals favors multiple self-organizing processes within the crystal, including life or even conscious observers in our universe. Philosophically, this theory shows harmony with nature and replaces absurd perspectives of current cosmology.
Independent of cosmology, we suggest that a “real world” (i.e. our everyday macroscopic world) happens only inside a crystal. “Outside”, there is wild quantum foam and a superposition of all possibilities. In our crystallized world the vacuum no longer boils but is cooled down by the crystallization event; space-time exists and general relativity holds. Vacuum energy becomes 10**20 times smaller, exactly as observed in our everyday world. We live in a “solid” state within a crystal: the n quanta which build our world have all their different m states nicely separated. There are only n**m states available for this local “multiverse”. The arrow of entropy for each edge of the crystal forms one fate, one world-line or clear development of our world, while layers of the crystal are different system states. Mathematical leads from loop quantum gravity (LQG) point to the required interactions and potentials. Interaction potentials for strings or loop quanta of any dimension allow a solid, decoherent state of quanta, which is challenging to calculate. However, if we introduce the heuristic that any type of physical interaction of strings corresponds simply to a type of calculation, then the Hurwitz theorem of 1898 shows that only 1D, 2D, 4D and 8D (octonions) allow complex or hypercomplex number calculations. No other hypercomplex numbers, and hence no other dimensions or symmetries, allow calculations without yielding divisions by zero. However, the richest solution allowed by the Hurwitz theorem, the octonions, is actually the observed symmetry of our universe, E8. Standard physics such as condensation, crystallization and magnetization, but also solid-state physics and quantum computing, allows us to give an initial mathematical treatment of our new theory within LQG, to describe the cosmological state transformations by equations and, most importantly, to point out routes to the parametrization of free parameters by looking at testable phenomena, experiments and formulas that describe processes of crystallization, protein folding, magnetization, solid-state physics and quantum computing. This is presented here for LQG; for string theory it would be more elegant, but that was too demanding to be shown here.
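For reference, the precise mathematical statement invoked here is Hurwitz's theorem on normed division algebras (a standard result, quoted independently of the preprint's physical interpretation):

\[
  \text{The only normed division algebras over } \mathbb{R} \text{ are }
  \mathbb{R},\ \mathbb{C},\ \mathbb{H}\ (\text{quaternions}) \text{ and } \mathbb{O}\ (\text{octonions}),
  \text{ of dimensions } 1,\,2,\,4,\,8 .
\]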
Note: While my previous OPUS server preprint “A new cosmology of a crystallization process (decoherence) from the surrounding quantum soup provides heuristics to unify general relativity and quantum physics by solid state physics” (https://doi.org/10.25972/OPUS-23076) deals with the same topics and basic formulas, this new version is improved: a clearer title, a better introduction, more stringent mathematics, and an improved discussion of the implications, including quantum computing, hints for parametrization, and connections to LQG and other current cosmological efforts.
This version of 5 June 2021 is again an OPUS preprint, but it will next be edited for arXiv (https://arxiv.org).
In this viewpoint we do not change cosmology after the hot fireball starts (hence it agrees well with observation), but the changed start suggested here and its later implications lead to an even better fit with current observations (voids, supercluster and galaxy formation; matter and no antimatter) than the standard model with big bang and inflation: in an eternal ocean of qubits, a cluster of qubits crystallizes to defined bits. The universe does not jump into existence (“big bang”); rather, there is an eternal ocean of qubits in free superposition of all their quantum states (of any dimension, force field and particle type) as the permanent basis. The undefined, boiling vacuum is the real “outside”, once you leave our everyday universe. A set of n qubits in the ocean is “liquid”, in a very undefined state; they have all their m possible quantum states in free superposition. However, under certain conditions the qubits interact, become defined and freeze out; crystals form and give rise to a defined, real world with all possible time series and world lines. General relativity holds only within the crystal. In our universe, all n**m quantum possibilities are nicely separated and crystallized out to defined bit states. A toy example with 6 qubits, each having 2 states, illustrates this: 6 bits are completely sufficient to encode space using 3 bits for x, y and z, 1 bit for the particle type and 2 bits for its state. Just by crystallization, space, particles and their properties emerge from the ocean of qubits, and, following the arrow of entropy, time emerges, with an arrow of time and expansion from one corner of the toy universe to everywhere else. This perspective provides time as an emergent feature based on entropy: crystallization of each world line leads to defined world lines over their whole existence, while entropy ensures the direction of time and the higher representation of high-entropy states over the whole crystal and all slices of world lines. The crystal perspective is also economical compared to the Everett-type multiverse: each qubit has its m quantum states, and n qubits interacting, forming a crystal and hence turning into defined bit states, yield only n**m states and not more. There is no Everett-type world splitting with every decision; rather, individual world trajectories reside in individual world layers of the crystal. Finally, bit-separated crystals come and go in the qubit ocean, selecting for the ability to lay seeds for new crystals. Over generations, this self-organizing reproduction also selects for life-friendliness. The mathematical treatment introduces quantum action theory as a framework for a general lattice field theory extending quantum chromodynamics, where scalar fields for color interaction and gravity have to be derived from the permeating qubit-interaction field. Vacuum energy should become appropriately low through the binding properties of the qubit crystal. Connections to loop quantum gravity, string theory and emergent gravity are discussed. Standard physics (quantum computing, crystallization, solid-state physics) allows validation tests of this perspective and will extend current results.
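A minimal Python sketch of the 6-bit toy encoding described above (purely illustrative; the bit layout and function names are our own choices, not taken from the preprint):

# Toy encoding from the abstract: 6 bits suffice for a minimal "universe".
# Illustrative bit layout: bits 0-2 = x, y, z position flags,
# bit 3 = particle type, bits 4-5 = particle state.

def encode(x: int, y: int, z: int, ptype: int, state: int) -> int:
    """Pack one crystallized (decohered) configuration into 6 classical bits."""
    assert all(b in (0, 1) for b in (x, y, z, ptype)) and 0 <= state < 4
    return x | (y << 1) | (z << 2) | (ptype << 3) | (state << 4)

def decode(bits: int):
    """Unpack the 6-bit configuration back into its components."""
    return bits & 1, (bits >> 1) & 1, (bits >> 2) & 1, (bits >> 3) & 1, (bits >> 4) & 3

# Before crystallization, all 2**6 = 64 superposed configurations are available;
# after it, one definite configuration (one world-line slice) is realized.
all_configs = {encode(x, y, z, p, s)
               for x in (0, 1) for y in (0, 1) for z in (0, 1)
               for p in (0, 1) for s in range(4)}
print(sorted(all_configs) == list(range(64)))  # True: 64 distinct classical states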
In this work, we consider impulsive dynamical systems evolving on an infinite-dimensional space and subjected to external perturbations. We look for stability conditions that guarantee input-to-state stability for such systems. Our new dwell-time conditions allow the situation where both the continuous and the discrete dynamics can be unstable simultaneously. Lyapunov-like methods are developed for this purpose. Illustrative finite- and infinite-dimensional examples are provided to demonstrate the application of the main results. These examples cannot be treated by any other published approach and demonstrate the effectiveness of our results.
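To fix notation, a hedged sketch of the standard objects involved (generic textbook form; the specific assumptions and dwell-time conditions of the paper are not reproduced here): an impulsive system with impulse times \(\{t_k\}\) and input \(u\),

\[
  \dot{x}(t) = f\big(x(t), u(t)\big), \quad t \notin \{t_k\},
  \qquad
  x(t_k) = g\big(x^-(t_k), u(t_k)\big), \quad k \in \mathbb{N},
\]

is called input-to-state stable (ISS) if there exist \(\beta \in \mathcal{KL}\) and \(\gamma \in \mathcal{K}\) such that

\[
  \|x(t)\| \;\le\; \beta\big(\|x(t_0)\|,\, t - t_0\big) + \gamma\big(\|u\|_{\infty}\big)
  \qquad \text{for all } t \ge t_0 .
\]

Dwell-time conditions then restrict how often the impulse times \(t_k\) may occur, so that a possibly destabilizing part of the dynamics is compensated by the stabilizing part.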
In this work, we studied in great detail how the unknown parameters of the SUSY seesaw model can be determined from measurements of observables at or below collider energies, namely rare flavor violating decays of leptons, slepton pair production processes at linear colliders and slepton mass differences. This is a challenging task as there is an intricate dependence of the observables on the unknown seesaw, light neutrino and mSUGRA parameters. In order to separate these different influences, we first considered two classes of seesaw models, namely quasi-degenerate and strongly hierarchical right-handed neutrinos. As a generalisation, we presented a method that can be used to reconstruct the high energy seesaw parameters, among them the heavy right-handed neutrino masses, from low energy observables alone.
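The reconstruction problem rests on the type-I seesaw relation, quoted here in its generic textbook form together with the commonly used leading-log approximation for the induced slepton-mass corrections (neither formula is spelled out in the abstract; they are standard results, not specific to this thesis):

\[
  m_\nu \;\simeq\; - m_D\, M_R^{-1}\, m_D^{T}, \qquad m_D = Y_\nu\, v_u ,
\]

\[
  \big(\delta m^2_{\tilde{L}}\big)_{ij} \;\approx\; -\frac{3 m_0^2 + A_0^2}{8\pi^2}\,
  \big(Y_\nu^{\dagger}\, L\, Y_\nu\big)_{ij},
  \qquad L_{kk} = \ln\frac{M_{\rm GUT}}{M_{R_k}} ,
\]

where \(Y_\nu\) is the neutrino Yukawa matrix, \(M_R\) the heavy right-handed Majorana mass matrix, and \(m_0\), \(A_0\) the mSUGRA parameters; it is through such corrections that the high-energy seesaw parameters feed into the low-energy observables listed above.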
Blazars are among the most luminous sources in the universe. Their extreme short-time variability indicates emission processes powered by a supermassive black hole. With the current generation of Imaging Air Cherenkov Telescopes, these sources are explored at very high energies. By lowering the threshold below 100 GeV and improving the sensitivity of the telescopes, more and more blazars are discovered in this energy regime. For the MAGIC telescope, a low-energy analysis has been developed that reaches energies of 50 GeV for the first time. The method is presented in this thesis using the example of PG 1553+113, for which a spectrum between 50 GeV and 900 GeV was measured. In the energy regime observed by MAGIC, strong attenuation of the gamma-rays is expected from pair production due to interactions of the gamma-rays with low-energy photons of the extragalactic background light. For PG 1553+113, this provides the possibility to constrain the redshift of the source, which is still unknown. Well studied from radio to X-ray energies, PG 1553+113 was discovered in 2005 in the very high energy regime. In total, it was observed with the MAGIC telescope for 80 hours between April 2005 and April 2007. From more than three years of data taking, the MAGIC telescope provides huge amounts of data and a large number of files from various sources. To handle this data volume and to provide monitoring of the data quality, an automatic procedure is essential. Therefore, a concept for automatic data processing and management has been developed. Thanks to its flexibility, the concept is easily applicable to future projects. The implementation of an automatic analysis has been running stably for three years in the data center in Würzburg and provides consistent results for all MAGIC data, i.e. identical processing ensures comparability. In addition, this database-controlled system allows for easy tests of new analysis methods and the re-processing of all data with a new software version at the push of a button. At any stage, not only are the availability of the data and its processing status known, but a large set of quality parameters and results can also be queried from the database, facilitating quality checks, data selection and continuous monitoring of the telescope performance. By using the automatic analysis, the whole data sample can be analyzed in a reasonable amount of time, and the analyzers can concentrate on interpreting the results instead. For PG 1553+113, the tools and results of the automatic analysis were used. Compared to the previously published results, the software includes improvements such as an absolute pointing correction, an absolute light calibration and improved quality and background-suppression cuts. In addition, newly developed analysis methods taking into account timing information were used. Based on the automatically produced results, the presented analysis was enhanced using a special low-energy analysis. Part of the data were affected by absorption due to the Saharan Air Layer, i.e. sand dust in the atmosphere. Therefore, a new method has been developed to correct for the effect of this meteorological phenomenon. Applying the method, the affected data could be corrected for apparent flux variations and for the effects of absorption on the spectrum, allowing the results to be used for further studies. This is especially interesting, as these data were taken during a multi-wavelength campaign.
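As a purely hypothetical illustration of such database-driven bookkeeping (the table and column names below are invented for this sketch and are not those used in the Würzburg data center), a minimal Python example:

import sqlite3

# Invented minimal schema: one row per observation run with processing-status flags.
con = sqlite3.connect("magic_analysis.db")
con.execute("""CREATE TABLE IF NOT EXISTS runs (
    run_number INTEGER PRIMARY KEY,
    source     TEXT,
    obs_date   TEXT,
    calibrated INTEGER DEFAULT 0,  -- 1 once calibration has finished
    quality_ok INTEGER DEFAULT 0   -- 1 once quality checks have passed
)""")

# Example query: which runs of a given source still await calibration,
# e.g. after a new software version requires re-processing?
pending = con.execute(
    "SELECT run_number FROM runs WHERE source = ? AND calibrated = 0",
    ("PG 1553+113",),
).fetchall()
print(len(pending), "runs pending calibration")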
For the whole data sample of 54 hours after quality checks, a signal from the position of PG 1553+113 was found with a significance of 15 standard deviations. Fitting a power law to the combined spectrum between 75 GeV and 900 GeV yields a spectral slope of 4.1 +/- 0.2. Thanks to the low-energy analysis, the spectrum could be extended to below 50 GeV. Fitting down to 48 GeV, the flux remains the same, but the slope changes to 3.7 +/- 0.1. The determined daily light curve shows that the integral flux above 150 GeV is consistent with a constant flux. No significant variability of the spectral shape was found in three years of observations either. In July 2006, a multi-wavelength campaign was performed. Simultaneous data from the X-ray satellite Suzaku, the optical telescope KVA and the two Cherenkov experiments MAGIC and H.E.S.S. are available. Suzaku measured a spectrum up to 30 keV for the first time. The source was found to be at an intermediate flux level compared to previous X-ray measurements, and no short-term variability was found in the continuous data sample of 41.1 ksec. In the gamma-ray regime, too, no variability was found during the campaign. Assuming a maximum slope of 1.5 for the intrinsic spectrum, an upper limit of z < 0.74 was determined by deabsorbing the measured spectrum for the attenuation of photons by the extragalactic background light. For further studies, a redshift of z = 0.3 was assumed. Collecting various data from radio, infrared, optical, ultraviolet, X-ray and gamma-ray energies, a spectral energy distribution was determined, including the simultaneous data of the multi-wavelength campaign. Fitting the simultaneous data with different synchrotron self-Compton models shows that the observed spectral shape can be explained by synchrotron self-Compton processes. The best result was obtained with a model assuming a log-parabolic electron distribution.
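The deabsorption step used for the redshift limit follows the standard attenuation relation; a hedged summary in formula form (generic relation, with the numbers taken from the abstract above):

\[
  F_{\rm obs}(E) \;=\; F_{\rm int}(E)\, e^{-\tau(E,\,z)},
  \qquad
  F_{\rm int}(E) \;\propto\; E^{-\Gamma_{\rm int}} ,
\]

where \(\tau(E, z)\) is the optical depth from pair production on the extragalactic background light. Requiring the deabsorbed (intrinsic) slope to satisfy \(\Gamma_{\rm int} \ge 1.5\), while the observed slope is about 4.1 between 75 GeV and 900 GeV, translates into the quoted upper limit z < 0.74.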
The observation of electromagnetic counterparts to both high energy neutrinos and gravitational waves marked the beginning of a new era in astrophysics. The multi-messenger approach allows us to gain new insights into the most energetic events in the Universe such as gamma-ray bursts, supernovas, and black hole mergers. Real-time multi-messenger alerts are the key component of the observational strategies to unravel the transient signals expected from astrophysical sources. Focusing on the high-energy regime, we present a historical perspective of multi-messenger observations, the detectors and observational techniques used to study them, the status of the multi-messenger alerts and the most significant results, together with an overview of the future prospects in the field.
Indirect Search for Dark Matter in the Universe - the Multiwavelength and Multiobject Approach
(2011)
Cold dark matter constitutes a basic tenet of modern cosmology, essential for our understanding of structure formation in the Universe. Since its first discovery by means of spectroscopic observations of the dynamics of the Coma cluster some 80 years ago, mounting evidence of its gravitational pull and its impact on the geometry of space-time has built up across a wide range of scales, from galaxies to the entire Hubble flow. The apparent lack of electromagnetic coupling and independent measurements of the energy density of baryonic matter from the primordial abundances of light elements show the non-baryonic nature of dark matter, and its clustering properties prove that it is cold, i.e. that its temperature was lower than its mass at the time of radiation-matter equality. Generic particle candidates for cold dark matter are weakly interacting massive particles at the electroweak symmetry-breaking scale, such as the neutralinos in R-parity conserving supersymmetry. Such particles would naturally freeze out with a cosmologically relevant relic density at early times in the expanding Universe. Subsequent clustering of matter would recover annihilation interactions between the dark matter particles to some extent and thus lead to potentially observable high-energy emission from the decaying unstable secondaries produced in annihilation events. The spectra of the secondaries would permit a determination of the mass and annihilation cross section, which are crucial for the microphysical identification of the dark matter. This is the central motivation for indirect dark matter searches. However, presently neither the indirect searches, nor the complementary direct searches based on the detection of elastic scattering events, nor the production of candidate particles in collider experiments has yet provided unequivocal evidence for dark matter. This does not come as a surprise, since the dark matter particles interact only through weak interactions and therefore the corresponding secondary emission must be extremely faint. It turns out that even for the strongest mass concentrations in the Universe, the dark matter annihilation signal is expected not to exceed the level of competing astrophysical sources. Thus, the discrimination of the putative dark matter annihilation signal from the signals of the astrophysical inventory has become crucial for indirect search strategies. In this thesis, a novel search strategy will be developed and exemplified in which target selection across a wide range of masses, astrophysical background estimation, and multiwavelength signatures play the key role. It turns out that the uncertainties regarding the halo profile and the boost due to surviving substructure are larger for halos at the lower end of the observed mass scales, i.e. in the regime of dwarf galaxies and below, while astrophysical backgrounds tend to become more severe for massive dark matter halos such as clusters of galaxies. By contrast, the uncertainties due to unknown details of particle physics are invariant under changes of the halo mass. Therefore, the different scaling behaviors can be employed to significantly cut down on the uncertainties in observations of different targets covering a major part of the involved mass scales. This strategic approach was implemented in the scientific program carried out with the MAGIC telescope system.
Observations of dwarf galaxies and of the Virgo and Perseus galaxy clusters have been carried out and, at the time of writing, result in some of the most stringent constraints on weakly interacting massive particles from indirect searches. Here, the low-threshold design of the MAGIC telescope system plays a crucial role, since the bulk of the high-energy photons, produced with high multiplicity during the fragmentation of unstable dark matter annihilation products, is emitted at energies well below the dark matter mass scale. The upper limits severely constrain less generic, but more prolific scenarios characterized by extraordinarily high annihilation efficiencies.
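The scaling argument developed above can be made concrete with the standard expression for the gamma-ray flux from dark matter annihilation (generic textbook form for a self-conjugate particle, quoted here for orientation rather than as a result of the thesis):

\[
  \frac{d\Phi_\gamma}{dE}
  \;=\;
  \underbrace{\frac{\langle\sigma v\rangle}{8\pi\, m_\chi^{2}}\,\frac{dN_\gamma}{dE}}_{\text{particle physics}}
  \;\times\;
  \underbrace{\int_{\Delta\Omega}\!\int_{\rm l.o.s.}\rho_\chi^{2}(r)\,dl\,d\Omega}_{\text{astrophysical }J\text{-factor}} ,
\]

where \(m_\chi\) is the dark matter mass, \(\langle\sigma v\rangle\) the velocity-averaged annihilation cross section and \(dN_\gamma/dE\) the photon spectrum per annihilation. The particle-physics factor is the same for every target, whereas the J-factor carries the halo-profile and substructure-boost uncertainties, which is exactly the separation exploited by the multiwavelength and multi-object strategy described above.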