TY - THES A1 - Herrmann, Marc T1 - The Total Variation on Surfaces and of Surfaces T1 - Die totale Variation auf Oberflächen und von Oberflächen N2 - This thesis is concerned with applying the total variation (TV) regularizer to surfaces and to different types of shape optimization problems. The resulting problems are challenging since they suffer from the non-differentiability of the TV-seminorm, but unlike most other priors it favors piecewise constant solutions, which results in piecewise flat geometries for shape optimization problems. The first part of this thesis deals with an analogue of the TV image reconstruction approach [Rudin, Osher, Fatemi (Physica D, 1992)] for images on smooth surfaces. A rigorous analytical framework is developed for this model and its Fenchel predual, which is a quadratic optimization problem with pointwise inequality constraints on the surface. A function space interior point method is proposed to solve it. Afterwards, a discrete variant (DTV) based on a nodal quadrature formula is defined for piecewise polynomial, globally discontinuous and continuous finite element functions on triangulated surface meshes. DTV has favorable properties, which include a convenient dual representation. Next, an analogue of the total variation prior for the normal vector field along the boundary of smooth shapes in 3D is introduced. Its analysis is based on a differential geometric setting in which the unit normal vector is viewed as an element of the two-dimensional sphere manifold. Shape calculus is used to characterize the relevant derivatives, and a variant of the split Bregman method for manifold-valued functions is proposed. This is followed by an extension of the total variation prior for the normal vector field to piecewise flat surfaces, and the previous variant of the split Bregman method is adapted accordingly. Numerical experiments confirm that the new prior favors polyhedral shapes.
N2 - This thesis is concerned with applying the total variation (TV) as a regularizer on surfaces and in various shape optimization problems. The resulting optimization problems are challenging because the TV-seminorm renders them non-differentiable. In contrast to other regularizers, however, it favors piecewise constant solutions, which leads to piecewise flat geometries in shape optimization problems. The first part of this thesis is devoted to extending the mathematical image processing approach of [Rudin, Osher, Fatemi (Physica D, 1992)] from flat images to smooth surfaces and their textures. For the associated optimization problem, the Fenchel predual problem is derived. This is a quadratic optimization problem with inequality constraints, for whose solution an interior point method in function spaces is presented. Based on a quadrature formula, a discrete variant (DTV) of the TV-seminorm is then developed for globally discontinuous and continuous finite element functions on triangulated surfaces. DTV has favorable properties, such as a convenient dual representation. In the last part, a TV analogue for the surface normal of smooth shapes in 3D is first presented and analyzed using differential geometry. A possible extension to piecewise smooth surfaces is then introduced. To solve both regularized problems, a variant of the split Bregman method for manifold-valued functions is used.
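As background, the flat one-dimensional analogue of the ROF model generalized in this thesis can be solved with a few lines of split Bregman iteration. The sketch below is purely illustrative: it is not the surface or manifold-valued method developed in the thesis, and all parameter values are assumptions.

```python
# Illustrative 1D flat-image ROF analogue solved with split Bregman
# (a minimal sketch; parameters lam, mu and the iteration counts are assumptions).

def tv_denoise_1d(f, lam=8.0, mu=4.0, outer=50, inner=20):
    """Approximately minimize lam/2 * ||u - f||^2 + ||Du||_1, (Du)_i = u_{i+1} - u_i."""
    n = len(f)
    u = list(f)
    d = [0.0] * (n - 1)  # auxiliary variable d ~ Du
    b = [0.0] * (n - 1)  # Bregman variable
    for _ in range(outer):
        # u-step: (lam*I + mu*D^T D) u = lam*f + mu*D^T(d - b), via Gauss-Seidel sweeps
        for _ in range(inner):
            for i in range(n):
                rhs, diag = lam * f[i], lam
                if i > 0:
                    rhs += mu * (u[i - 1] + d[i - 1] - b[i - 1])
                    diag += mu
                if i < n - 1:
                    rhs += mu * (u[i + 1] - d[i] + b[i])
                    diag += mu
                u[i] = rhs / diag
        # d-step: soft shrinkage of Du + b with threshold 1/mu, then Bregman update
        for i in range(n - 1):
            g = (u[i + 1] - u[i]) + b[i]
            d[i] = max(abs(g) - 1.0 / mu, 0.0) * (1.0 if g >= 0 else -1.0)
            b[i] = g - d[i]
    return u
```

The soft-shrinkage step is what handles the non-differentiability of the TV term, while the quadratic u-step is a symmetric positive definite linear system, here solved with a few Gauss-Seidel sweeps.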
KW - Gestaltoptimierung KW - optimization KW - total variation KW - Formoptimierung KW - Shape Optimization KW - Optimierung KW - Totale Variation KW - Finite-Elemente-Methode Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-240736 ER - TY - JOUR A1 - Konijnenberg, Mark A1 - Herrmann, Ken A1 - Kobe, Carsten A1 - Verburg, Frederik A1 - Hindorf, Cecilia A1 - Hustinx, Roland A1 - Lassmann, Michael T1 - EANM position paper on article 56 of the Council Directive 2013/59/Euratom (basic safety standards) for nuclear medicine therapy JF - European Journal of Nuclear Medicine and Molecular Imaging N2 - The EC Directive 2013/59/Euratom states in article 56 that exposures of target volumes in nuclear medicine treatments shall be individually planned and their delivery appropriately verified. The Directive also mentions that medical physics experts should always be appropriately involved in those treatments. Although it is obvious that, in nuclear medicine practice, every nuclear medicine physician and physicist should follow national rules and legislation, the EANM considered it necessary to provide guidance on how to interpret the Directive statements for nuclear medicine treatments. For this purpose, the EANM proposes to distinguish three levels of compliance with the optimization principle in the Directive, inspired by the indication of levels in prescribing, recording and reporting of absorbed doses after radiotherapy defined by the International Commission on Radiation Units and Measurements (ICRU): Most nuclear medicine treatments currently applied in Europe are standardized. The minimum requirement for those treatments is ICRU level 1 (“activity-based prescription and patient-averaged dosimetry”), which is defined by administering the activity within 10% of the intended activity, typically according to the package insert or to the respective EANM guidelines, followed by verification of the therapy delivery, if applicable.
Non-standardized treatments are essentially those in a developmental phase or approved radiopharmaceuticals being used off-label with significantly higher activities (> 25% more than in the label). These treatments should comply with ICRU level 2 (“activity-based prescription and patient-specific dosimetry”), which implies recording and reporting of the absorbed dose to organs at risk and optionally the absorbed dose to treatment regions. The EANM strongly encourages fostering research that eventually leads to treatment planning according to ICRU level 3 (“dosimetry-guided patient-specific prescription and verification”), whenever possible and relevant. Evidence for the superiority of therapy prescription on the basis of patient-specific dosimetry has not been obtained. However, the authors believe that a better understanding of therapy dosimetry, i.e. how much and where the energy is delivered, and of radiobiology, i.e. radiation-related processes in tissues, is key to the long-term improvement of our treatments. KW - nuclear medicine therapy KW - dosimetry KW - optimization KW - BSS directive Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-235280 SN - 1619-7070 VL - 48 ER - TY - THES A1 - Moldovan, Christian T1 - Performance Modeling of Mobile Video Streaming T1 - Leistungsmodellierung von mobilem Videostreaming N2 - In the past two decades, there has been a trend to move from traditional television to Internet-based video services. With video streaming having become one of the most popular applications on the Internet and the current state of the art in media consumption, the quality expectations of consumers are increasing. In contrast to some years ago, low-quality videos are no longer considered acceptable due to the increased sizes and resolutions of devices. If the high expectations of the users are not met and a video is delivered in poor quality, they often abandon the service.
Therefore, Internet Service Providers (ISPs) and video service providers are facing the challenge of providing seamless multimedia delivery in high quality. Currently, during peak hours, video streaming causes almost 58% of the downstream traffic on the Internet. With higher mobile bandwidth, mobile video streaming has also become commonplace. According to the 2019 Cisco Visual Networking Index, 79% of mobile traffic will be video traffic in 2022, and, according to Ericsson, video is forecast to make up 76% of total Internet traffic by 2025. Ericsson further predicts that in 2024 over 1.4 billion devices will be subscribed to 5G, which will offer a downlink data rate of 100 Mbit/s in dense urban environments. One of the most important goals of ISPs and video service providers is for their users to have a high Quality of Experience (QoE). The QoE describes the degree of delight or annoyance a user experiences when using a service or application. In video streaming, the QoE depends on how seamlessly a video is played and whether there are stalling events or quality degradations. These characteristics of a transmitted video are described as the application layer Quality of Service (QoS). In general, the QoS is defined by the ITU as "the totality of characteristics of a telecommunications service that bear on its ability to satisfy stated and implied needs of the user of the service". The network layer QoS describes the performance of the network and is decisive for the application layer QoS. In Internet video, a buffer is typically used to store downloaded video segments in order to compensate for network fluctuations. If the available bandwidth decreases temporarily, the video can still be played out from the buffer without interruption; if the buffer runs empty, stalling occurs. There are different policies and parameters that determine how large the buffer is, at what buffer level to start the video, and at what buffer level to resume playout after stalling.
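The buffer mechanism described above can be illustrated with a minimal fluid-model sketch (the thresholds, units, and bandwidth trace below are assumptions for illustration, not taken from the thesis):

```python
# Minimal fluid-model sketch of a client-side playout buffer (illustrative
# only; thresholds and units are assumptions, not the thesis's model).

def simulate_buffer(bandwidth, bitrate=1.0, start_level=2.0, resume_level=2.0, dt=1.0):
    """Count stalling events for a given bandwidth trace.

    bandwidth:    download rate per time step, in video-seconds per second
    bitrate:      playout drains `bitrate` video-seconds per wall-clock second
    start_level:  buffered seconds required before initial playout starts
    resume_level: buffered seconds required to resume playout after a stall
    """
    buffer, playing, stalls = 0.0, False, 0
    for bw in bandwidth:
        buffer += bw * dt                   # downloaded segments fill the buffer
        if playing:
            buffer -= bitrate * dt          # playout drains the buffer
            if buffer <= 0.0:               # buffer ran empty: stalling event
                buffer, playing, stalls = 0.0, False, stalls + 1
        elif buffer >= (start_level if stalls == 0 else resume_level):
            playing = True                  # threshold reached, (re)start playout
    return stalls

# A temporary bandwidth drop below the playout bit rate depletes the buffer:
print(simulate_buffer([2, 2, 0, 0, 0, 0, 2, 2, 2]))  # → 1 stalling event
```

Tuning `start_level` and `resume_level` trades initial delay against stalling probability, which is exactly the buffer configuration question studied in the thesis.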
These have to be finely tuned to achieve the highest QoE for the user. If the bandwidth decreases for a longer time period, a limited buffer will deplete and stalling cannot be avoided. An important research question is how to configure the buffer optimally for different users and situations. In this work, we tackle this question using analytic models and measurement studies. With HTTP Adaptive Streaming (HAS), video players have the capability to adapt the video bit rate at the client side according to the available network capacity. This way, the depletion of the video buffer, and thus stalling, can be avoided. In HAS, the quality in which the video is played and the number of quality switches also have an impact on the QoE. Thus, an important problem is the adaptation of video streaming so that these parameters are optimized. In a shared WiFi network, multiple video users share a single bottleneck link and compete for bandwidth. In such a scenario, it is important that resources are allocated to users in a way that allows all of them to have a similar QoE. In this work, we therefore investigate the possible fairness gain when moving from network fairness towards application-layer QoS fairness. In mobile scenarios, the energy and data consumption of the user device are limited resources that must be managed besides the QoE. Therefore, it is also necessary to investigate solutions that conserve these resources in mobile devices. But how can resources be conserved without sacrificing application layer QoS? As an example of such a solution, this work presents a new probabilistic adaptation algorithm that uses abandonment statistics for its decision making, aiming at minimizing the resource consumption while maintaining high QoS. With current developments such as 5G, bandwidths are increasing, latencies are decreasing, and networks are becoming more stable, leading to higher QoS.
This allows new real-time, data-intensive applications such as cloud gaming, virtual reality, and augmented reality to become feasible on mobile devices, which poses completely new research questions. The high energy consumption of such applications remains an issue, as the energy capacity of devices is currently not increasing as quickly as the available data rates. In this work we compare the optimal performance of different strategies for adaptive 360-degree video streaming. N2 - Over the past two decades, there has been a strong trend away from traditional television towards video streaming over the Internet. Video streaming currently accounts for the largest share of total Internet traffic. When an Internet video is downloaded, it is stored in a buffer at the client before playout to compensate for network fluctuations. If the buffer runs empty, the video must stop (stalling) in order to reload data. To prevent this, buffering strategies and parameters must be optimally adapted to user scenarios. We address this problem in the first chapter of this thesis using queueing models, numerical simulations, and measurement studies. To assess the quality of a video stream, we use a model based on subjective studies. With HTTP Adaptive Streaming, the video player has the ability to request video segments at a bit rate, and thus a quality, adapted to the available bandwidth. In this way, the depletion of the buffer can be slowed and stalling prevented. However, besides stalling, the video quality and the number of quality switches also affect viewer satisfaction. In the second chapter, we investigate to what extent these parameters can be optimized, using linear and quadratic programs as well as a queueing model. Here we also examine fairness in networks with multiple users and 360-degree videos.
In the third chapter, we investigate ways to make video streaming more resource-efficient. To this end, we examine in a field study the possibility of deploying caches at WiFi hotspots in order to reduce redundant traffic. We study the behavior of mobile video users by evaluating a user study. In addition, we present a new adaptation algorithm that reduces the data and energy consumption of the video stream depending on user behavior. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/20 KW - Videoübertragung KW - Quality of Experience KW - Dienstgüte KW - Leistungsbewertung KW - Mathematisches Modell KW - video streaming KW - performance modeling KW - optimization Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-228715 SN - 1432-8801 ER - TY - JOUR A1 - Terekhov, Maxim A1 - Elabyad, Ibrahim A. A1 - Schreiber, Laura M. T1 - Global optimization of default phases for parallel transmit coils for ultra-high-field cardiac MRI JF - PLoS One N2 - The development of novel multiple-element transmit-receive arrays is an essential factor for improving B\(_1\)\(^+\) field homogeneity in cardiac MRI at ultra-high magnetic field strength (B\(_0\) >= 7.0 T). One of the key steps in the design and fine-tuning of such arrays during the development process is finding the default driving phases for the individual coil elements that provide the best possible homogeneity of the combined B\(_1\)\(^+\)-field achievable without (or before) subject-specific B\(_1\)\(^+\)-adjustment in the scanner. This task is often solved by time-consuming brute-force searches or by optimization methods of limited efficiency. In this work, we propose a robust technique to find phase vectors providing optimization of the B\(_1\)\(^+\)-homogeneity in the default setup of multiple-element transceiver arrays.
The key point of the described method is the pre-selection of starting vectors for the iterative solver-based search, which maximizes the probability of finding a global extremum of a cost function optimizing the homogeneity of a shaped B\(_1\)\(^+\)-field. This strategy allows for (i) a drastic reduction of the computation time in comparison to a brute-force search and (ii) finding phase vectors that provide a combined B\(_1\)\(^+\)-field with homogeneity characteristics superior to those obtained by the random multi-start optimization approach. The method was used efficiently for optimizing the default phase settings of the in-house-built 8Tx/16Rx arrays designed for cMRI in pigs at 7T. KW - optimization KW - magnetic resonance imaging KW - power grids KW - swine KW - electromagnetics KW - linear regression analysis KW - thorax KW - wave interference Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-265737 VL - 16 IS - 8 ER -
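The pre-selected multi-start idea can be sketched in a few lines: score many random candidate phase vectors with a homogeneity cost, keep the most promising ones as starting points, and refine each with a local search. This is a toy illustration; the cost function (coefficient of variation of the combined field magnitude), the coordinate-descent refinement, and all parameters are assumptions, not the paper's implementation.

```python
import cmath
import math
import random

# Toy sketch of pre-selected multi-start phase optimization for combining
# per-channel complex field maps (all modeling choices here are assumptions).

def cost(phases, sens):
    """Inhomogeneity of the combined field: std/mean of |sum_k s_k * e^{i phi_k}| over voxels."""
    mags = [abs(sum(s * cmath.exp(1j * p) for s, p in zip(voxel, phases)))
            for voxel in sens]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return var ** 0.5 / mean

def refine(phases, sens, step=0.5, iters=100):
    """Greedy coordinate-descent local search; never worse than its starting vector."""
    phases, best = list(phases), cost(phases, sens)
    for _ in range(iters):
        for k in range(len(phases)):
            for delta in (step, -step):
                trial = phases[:]
                trial[k] += delta
                c = cost(trial, sens)
                if c < best:
                    phases, best = trial, c
        step *= 0.9  # shrink the search step over time
    return phases, best

def multistart(sens, n_channels, n_candidates=100, n_starts=3, seed=0):
    """Pre-select the cheapest candidate phase vectors, then refine each and keep the best."""
    rng = random.Random(seed)
    cands = [[rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_channels)]
             for _ in range(n_candidates)]
    cands.sort(key=lambda p: cost(p, sens))       # cheap pre-selection of starts
    results = [refine(p, sens) for p in cands[:n_starts]]
    return min(results, key=lambda r: r[1])
```

Pre-selecting the starts concentrates the expensive local refinement on the candidates most likely to lie in the basin of the global extremum, which is the motivation the abstract describes for avoiding both brute force and purely random multi-start.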