Human artificial full-thickness skin models are gaining importance in the field of tissue engineering; they are now being investigated and optimized in many different disciplines and are even applied as animal-replacement models supporting basic research. This goes hand in hand with high demands on their quality and reproducibility. In the present work, the influence of culture conditions and donor material on the quality of human full-thickness skin models produced in vitro was systematically investigated for the first time. To this end, a catalogue of histomorphological quality criteria was first compiled, based on real human skin biopsies, which allowed these criteria to be weighted with regard to their use as genuine skin substitute models. The established media KGM 2, KGM 2 variant, and EpiLife® and their cultivation protocols were used to produce the skin models. The cellular basis of the present investigations was provided by the foreskins of sixteen children after circumcision. Keratinocytes and fibroblasts were isolated, and a total of 144 human full-thickness skin models were produced in triplicate with the three media and their associated cultivation protocols; these models were then assessed according to the evaluation catalogue. The underlying evaluation and quality criteria corresponded to histomorphological parameters, including the thickness of the epidermis and dermis, the adherence between epidermis and dermis, and the absence of cell nuclei in the stratum corneum of the epidermis.
Regression models based on generalized estimating equations (GEE) were applied to analyze the influence of donor age and culture medium. Donor age and culture medium were each examined independently in a univariate analysis. The analysis of the influence of the culture medium on terminal differentiation within the epidermis showed that cultivation with EpiLife® produced significantly fewer full-thickness skin models with cell nuclei in the stratum corneum than cultivation with KGM 2 or KGM 2 variant. The influence of the culture medium on epidermis and dermis thickness was not significant in either case, although there was a trend towards a thinner epidermis and dermis after EpiLife® cultivation. The analysis of donor age showed a positive influence of a younger donor on the epidermis thickness of the full-thickness skin model: the younger the foreskin donor, the significantly thicker the epidermis. A higher donor age, in contrast, led to significantly less detachment of the epidermis from the dermis. Donor age had no influence on dermis thickness or on the absence of cell nuclei in the stratum corneum. The three significant associations from the univariate analysis were examined in a multivariable analysis. Here, the influence of donor age on epidermis thickness and on dermo-epidermal adhesion remained significant with the culture media, the absence of cell nuclei in the stratum corneum, and the dermis thickness included as covariates. The influence of EpiLife® on the absence of cell nuclei in the stratum corneum also remained significant in the multivariable analysis. In addition, a significant influence of the dermis on the epidermis was shown, with the epidermis shrinking as the dermis grew larger. In a further, more complex statistical analysis using a general linear model, the influence of a donor-medium interaction was analyzed without including donor age as a variable. The interaction of donor and culture medium had a significant influence on epidermis and dermis thickness and thus on the quality of the full-thickness skin models produced in vitro. There was therefore an independent influence of donor age and medium on the one hand, and on the other hand an influence of an optimal donor-medium combination on full-thickness skin model quality.
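As an illustration of this type of analysis, the following minimal sketch fits a GEE regression of epidermis thickness on donor age and culture medium, clustering the repeated models per donor. The column names, the file name, and the exchangeable working correlation are hypothetical assumptions; this is not the analysis code used in the thesis.

```python
# Minimal GEE sketch (hypothetical column names and file), not the thesis' actual analysis code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Each donor contributes several skin models, so observations are clustered by donor.
df = pd.read_csv("skin_models.csv")  # assumed columns: donor_id, donor_age, medium, epidermis_thickness

model = smf.gee(
    "epidermis_thickness ~ donor_age + C(medium)",  # a univariate analysis would drop one of the terms
    groups="donor_id",
    data=df,
    family=sm.families.Gaussian(),                  # a binary endpoint (e.g. nuclei in the stratum corneum) would use Binomial()
    cov_struct=sm.cov_struct.Exchangeable(),        # working correlation within a donor
)
result = model.fit()
print(result.summary())
```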
In summary, the present work demonstrates for the first time the complex interplay of donor factors and cultivation conditions and their effects on the quality of human full-thickness skin models. These results are relevant for the use of such models as animal-replacement models in research. Taking these results into account, optimized organotypic full-thickness skin models can be produced in vitro, so that more complex skin models can be generated in the future. In a follow-up project, the foundations laid here are intended to support the use of skin models in research on acute GvHD of the skin.
Lagrange Multiplier Methods for Constrained Optimization and Variational Problems in Banach Spaces
(2018)
This thesis is concerned with a class of general-purpose algorithms for constrained minimization problems, variational inequalities, and quasi-variational inequalities in Banach spaces.
A substantial amount of background material from Banach space theory, convex analysis, variational analysis, and optimization theory is presented, including some results which are refinements of those existing in the literature. This basis is used to formulate an augmented Lagrangian algorithm with multiplier safeguarding for the solution of constrained optimization problems in Banach spaces. The method is analyzed in terms of local and global convergence, and many popular problem classes such as nonlinear programming, semidefinite programming, and function space optimization are shown to be included as special cases of the general setting.
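For orientation, the finite-dimensional equality-constrained special case \(\min_x f(x)\) subject to \(c(x) = 0\) illustrates the structure of such a method; the following is a standard textbook form with safeguarded multipliers, not the thesis' general Banach space formulation. Each outer iteration approximately minimizes the augmented Lagrangian with a bounded (safeguarded) multiplier estimate \(\bar\lambda_k\) and then updates the multiplier:
\[
\mathcal{L}_{\rho_k}(x,\bar\lambda_k) \;=\; f(x) + \bar\lambda_k^{\top} c(x) + \frac{\rho_k}{2}\,\|c(x)\|^2, \qquad
x_{k+1} \approx \operatorname*{arg\,min}_x \mathcal{L}_{\rho_k}(x,\bar\lambda_k), \qquad
\lambda_{k+1} = \bar\lambda_k + \rho_k\, c(x_{k+1}),
\]
where the safeguarded estimate \(\bar\lambda_{k+1}\) is obtained by projecting \(\lambda_{k+1}\) onto a fixed bounded set, and the penalty parameter \(\rho_k\) is increased whenever the constraint violation does not decrease sufficiently.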
The algorithmic framework is then extended to variational and quasi-variational inequalities, which include, by extension, Nash and generalized Nash equilibrium problems. For these problem classes, the convergence is analyzed in detail. The thesis then presents a rich collection of application examples for all problem classes, including implementation details and numerical results.
In the future Internet, the people-centric communication paradigm will be complemented by ubiquitous communication among people and devices, or even communication between devices. This comes along with the need for more flexible, cheap, and widely available Internet access. Two types of wireless networks are considered most appropriate for attaining those goals. While wireless sensor networks (WSNs) enhance the Internet’s reach by providing data about the properties of the environment, wireless mesh networks (WMNs) extend the Internet access possibilities beyond the wired backbone. This monograph contains four chapters which present modeling and optimization methods for WSNs and WMNs. Minimizing energy consumption is the most important goal of WSN optimization, and the literature consequently provides countless energy consumption models. The first part of the monograph studies the extent to which the chosen energy consumption model influences the outcome of analytical WSN optimizations. These considerations enable the second contribution, namely overcoming the problems on the way to a standardized energy-efficient WSN communication stack based on IEEE 802.15.4 and ZigBee. For WMNs both problems are of minor interest, whereas the network performance has a higher weight. The third part of the work therefore presents algorithms for calculating the max-min fair network throughput in WMNs with multiple link rates and an Internet gateway. The last contribution of the monograph investigates the impact of the LRA concept, which proposes to systematically assign more robust link rates than actually necessary, thereby making it possible to exploit the trade-off between spatial reuse and per-link throughput. A systematic study shows that a network-wide slightly more conservative LRA than necessary increases the throughput of a WMN where max-min fairness is guaranteed. It moreover turns out that LRA is suitable for increasing the performance of a contention-based WMN and is a valuable optimization tool.
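As a simplified illustration of the max-min fairness notion used above, the following generic progressive-filling sketch allocates rates to flows sharing capacitated links; it is not the monograph's WMN-specific algorithm and ignores link rates, interference, and the gateway.

```python
# Progressive filling for max-min fair rate allocation (generic illustration only).

def max_min_fair(flows, capacity, step=0.001):
    """flows: dict flow_id -> list of link ids it traverses.
       capacity: dict link_id -> link capacity.
       Returns dict flow_id -> approximately max-min fair rate."""
    rate = {f: 0.0 for f in flows}
    frozen = set()
    while len(frozen) < len(flows):
        # Raise all unfrozen flows uniformly by one step.
        for f in flows:
            if f not in frozen:
                rate[f] += step
        # Compute the load on every link and freeze flows crossing saturated links.
        load = {l: 0.0 for l in capacity}
        for f, links in flows.items():
            for l in links:
                load[l] += rate[f]
        saturated = {l for l in capacity if load[l] >= capacity[l] - 1e-9}
        for f, links in flows.items():
            if f not in frozen and any(l in saturated for l in links):
                frozen.add(f)
    return rate

# Example: two flows share link "a"; flow "f2" additionally uses the tighter link "b".
print(max_min_fair({"f1": ["a"], "f2": ["a", "b"]}, {"a": 1.0, "b": 0.3}))
```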
In this work, multi-particle quantum optimal control problems are studied in the framework of time-dependent density functional theory (TDDFT).
Quantum control problems are of great importance in both fundamental research and application of atomic and molecular systems. Typical applications are laser induced chemical reactions, nuclear magnetic resonance experiments, and quantum computing.
Theoretically, the problem of how to describe a non-relativistic system of multiple particles is solved by the Schrödinger equation (SE). However, due to the exponential increase in numerical complexity with the number of particles, it is impossible to directly solve the Schrödinger equation for large systems of interest. An efficient and successful approach to overcome this difficulty is the framework of TDDFT and the use of the time-dependent Kohn-Sham (TDKS) equations therein.
This is done by replacing the multi-particle SE with a set of nonlinear single-particle Schrödinger equations that are coupled through an additional potential.
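For reference, a standard form of the TDKS equations (written here in atomic units; an illustrative convention, since the abstract does not fix the notation) makes this coupling explicit:
\[
i\,\partial_t \phi_j(x,t) \;=\; \Big(-\tfrac{1}{2}\Delta + v_{\mathrm{ext}}(x,t) + v_{\mathrm{H}}[\rho](x,t) + v_{\mathrm{xc}}[\rho](x,t)\Big)\,\phi_j(x,t), \qquad
\rho(x,t) \;=\; \sum_{j=1}^{N} |\phi_j(x,t)|^2,
\]
where the Hartree potential \(v_{\mathrm{H}}\) and the exchange-correlation potential \(v_{\mathrm{xc}}\) depend on the density \(\rho\) and thereby couple the otherwise single-particle equations.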
Despite the fact that TDDFT is widely used for physical and quantum chemical calculations and software packages for its use are readily available, its mathematical foundation is still under active development and even fundamental issues remain unproven today.
The main purpose of this thesis is to provide a consistent and rigorous setting for the TDKS equations and for the related optimal control problems.
In the first part of the thesis, the frameworks of density functional theory (DFT) and TDDFT are introduced. This includes a detailed presentation of the different functional sets forming DFT. Furthermore, the known equivalence of the TDKS system to the original SE problem is discussed.
To implement the TDDFT framework for multi-particle computations, the TDKS equations provide one of the most successful approaches nowadays. However, only a few mathematical results concerning these equations are available, and these results do not cover all issues that arise in the formulation of optimal control problems governed by the TDKS model.
It is the purpose of the second part of this thesis to address such issues, in particular higher regularity of TDKS solutions and the case of weaker requirements on external (control) potentials, which are instrumental for the formulation of well-posed TDKS control problems. For this purpose, existence and uniqueness of TDKS solutions are investigated in the Galerkin framework, using energy estimates for the nonlinear TDKS equations.
In the third part of this thesis, optimal control problems governed by the TDKS model are formulated and investigated. To this end, relevant cost functionals that model the purpose of the control are discussed.
Hence, TDKS control problems result from the requirement of optimizing the given cost functionals subject to the differential constraint given by the TDKS equations. The analysis of these problems is novel and represents one of the main contributions of the present thesis.
In particular, existence of minimizers is proved and their characterization by TDKS optimality systems is discussed in detail.
To this end, Fréchet differentiability of the TDKS model and of the cost functionals is addressed, considering an \(H^1\) cost of the control.
This part is concluded by deriving the reduced gradient in the \(L^2\) and \(H^1\) inner products.
While \(L^2\)-based optimization is widespread in the literature, the choice of the \(H^1\) gradient is motivated in this work by theoretical considerations and by the resulting numerical advantages.
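As a brief illustration of this distinction (a standard identification via the Riesz representation, stated here for a control on a time interval with boundary terms omitted; not taken verbatim from the thesis), both gradients represent the same derivative, but with respect to different inner products:
\[
\langle J'(u), v\rangle \;=\; (\nabla_{L^2} J,\, v)_{L^2} \;=\; (\nabla_{H^1} J,\, v)_{H^1}, \qquad
(u,v)_{H^1} = (u,v)_{L^2} + (u',v')_{L^2},
\]
so the \(H^1\) gradient is obtained from the \(L^2\) gradient by solving the elliptic problem \(\nabla_{H^1}J - (\nabla_{H^1}J)'' = \nabla_{L^2}J\) with suitable boundary conditions, which yields a smoother update direction.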
The last part of the thesis is devoted to the numerical approximation of the TDKS optimality systems and to their solution by gradient-based optimization techniques.
For the former purpose, Strang time-splitting pseudo-spectral schemes are discussed, including a review of some recent theoretical estimates for these schemes and a numerical validation of these estimates.
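As a reminder of the underlying idea (the standard Strang splitting for a Hamiltonian \(H = T + V\) over one time step \(\delta t\); the TDKS-specific density dependence and the estimates reviewed in the thesis are not reproduced here):
\[
\psi(t+\delta t) \;\approx\; e^{-\frac{i}{2} V \delta t}\, e^{-i T \delta t}\, e^{-\frac{i}{2} V \delta t}\, \psi(t),
\]
where, in a pseudo-spectral realization, the kinetic factor is applied as a multiplication in Fourier space and the potential factors act pointwise in real space; the scheme is second-order accurate in \(\delta t\).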
For the latter purpose, nonlinear (projected) conjugate gradient methods are implemented and are used to validate the theoretical analysis of this thesis with results of numerical experiments with different cost functional settings.
Today's Internet is no longer only controlled by a single stakeholder, e.g. a standard body or a telecommunications company.
Rather, the interests of a multitude of stakeholders, e.g. application developers, hardware vendors, cloud operators, and network operators, collide during the development and operation of applications in the Internet.
Each of these stakeholders considers different KPIs to be important and attempts to optimise scenarios in its favour.
This results in different, often opposing views and can cause problems for the complete network ecosystem.
One example of such a scenario is a signalling storm in the mobile Internet, one of the largest of which occurred in Japan in 2012 due to the release and high popularity of a free instant messaging application.
The network traffic generated by the application caused a high number of connections to the Internet to be established and terminated.
This resulted in a similarly high number of signalling messages in the mobile network, causing overload and a loss of service for 2.5 million users over 4 hours.
While the network operator suffers the largest impact of this signalling overload, it does not control the application.
Thus, the network operator cannot change the application traffic characteristics to generate less network signalling traffic.
The stakeholders who could prevent, or at least reduce, such behaviour, i.e. application developers or hardware vendors, have no direct benefit from modifying their products in such a way.
This results in a clash of interests which negatively impacts the network performance for all participants.
The goal of this monograph is to provide an overview of the complex structures of stakeholder relationships in today's Internet applications in mobile networks.
To this end, we study different scenarios where such interests clash and suggest methods by which trade-offs can be optimised for all participants.
If such an optimisation is not possible or attempts at it might lead to adverse effects, we discuss the reasons.
X-ray computed tomography (CT) covers a very broad spectrum of possible test objects in its industrial applications. The goal of a CT measurement is a three-dimensional image of the distribution of the attenuation coefficient of the object with the greatest possible accuracy. The parameterization of a CT system for an optimal measurement result depends strongly on the object under examination. Predicting the optimal parameters requires taking into account the physical interactions of the object and the CT system with X-rays. The present work deals with modeling these interactions and with the possibility of automating the parameterization process on the basis of quality measures. The goal is a simulation-driven, automatic parameter optimization method that takes the object dependence into account. The existing X-ray simulation methodology is extended with respect to accuracy and efficiency. An approach is pursued that makes it possible to calibrate the simulation of a CT system to real systems. In addition, a model is presented for computing the second order of scattered radiation in the object. Owing to its analytical approach, it does not require a Monte Carlo method. So far, the literature provides no unambiguous definition of the quality of a CT measurement result. Such a definition is developed here, based on Shannon's information theory. The improvements to the simulation methodology as well as the application of the quality measure to simulation-driven parameter optimization are successfully demonstrated in examples and validated against reference methods.
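For context, the reconstruction target can be related to the measured data through the Beer-Lambert law; the following is the standard idealized monochromatic relation, not stated in the abstract itself:
\[
I \;=\; I_0 \exp\!\Big(-\int_{L} \mu(s)\,\mathrm{d}s\Big)
\qquad\Longleftrightarrow\qquad
-\ln\frac{I}{I_0} \;=\; \int_{L} \mu(s)\,\mathrm{d}s,
\]
where \(I_0\) and \(I\) are the intensities before and after the object and the integral runs along the ray \(L\). Effects such as a polychromatic spectrum or scattered radiation, which the work models explicitly, cause the measured data to deviate from this idealized line-integral model.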
Future broadband wireless networks should be able to support not only best effort traffic but also real-time traffic with strict Quality of Service (QoS) constraints. In addition, their available resources are scarce and limit the number of users. To facilitate QoS guarantees and increase the maximum number of concurrent users, wireless networks require careful planning and optimization. In this monograph, we study three aspects of performance optimization in wireless networks: resource optimization in WLAN infrastructure networks, quality of experience control in wireless mesh networks, and planning and optimization of wireless mesh networks. An adaptive resource management system is required to effectively utilize the limited resources on the air interface and to guarantee QoS for real-time applications. Thereby, both WLAN infrastructure and WLAN mesh networks have to be considered. An a priori setting of the access parameters is not meaningful due to the contention-based medium access and the high dynamics of the system. Thus, a management system is required which dynamically adjusts the channel access parameters based on the network load. While this is sufficient for wireless infrastructure networks, interference on neighboring paths and self-interference have to be considered for wireless mesh networks. In addition, careful channel allocation and route assignment are needed. Due to the large parameter space, standard optimization techniques fail for optimizing large wireless mesh networks. In this monograph, we reveal that biology-inspired optimization techniques, namely genetic algorithms, are well suited for the planning and optimization of wireless mesh networks. Although genetic algorithms do not always find the optimal solution, we show that with a good parameter set for the genetic algorithm, the overall throughput of the wireless mesh network can be significantly improved while still sharing the resources fairly among the users.
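As a generic illustration of the optimization approach, the following minimal genetic-algorithm skeleton evolves a channel assignment for mesh links; the encoding, the placeholder fitness function, and all parameters are hypothetical stand-ins, not those used in the monograph.

```python
# Minimal genetic-algorithm skeleton for channel assignment in a mesh network.
# Encoding, fitness, and parameters are hypothetical illustrations only.
import random

N_LINKS, N_CHANNELS = 20, 3                 # assumed problem size
POP, GENERATIONS, MUT_RATE = 50, 200, 0.05  # assumed GA parameters

def fitness(assignment):
    # Placeholder: reward assignments where adjacent links use different channels.
    # A real fitness function would estimate network throughput and fairness.
    return sum(assignment[i] != assignment[i + 1] for i in range(len(assignment) - 1))

def mutate(ind):
    return [random.randrange(N_CHANNELS) if random.random() < MUT_RATE else g for g in ind]

def crossover(a, b):
    cut = random.randrange(1, N_LINKS)
    return a[:cut] + b[cut:]

population = [[random.randrange(N_CHANNELS) for _ in range(N_LINKS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]        # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```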
Maps are the main tool to represent geographical information. Users often zoom in and out to access maps at different scales. Continuous map generalization tries to make the changes between different scales smooth, which is essential to provide users with a comfortable zooming experience.
In order to achieve continuous map generalization with high quality, we optimize some important aspects of maps. In this book, we use optimization in the generalization of land-cover areas, administrative boundaries, buildings, and coastlines. According to our experiments, continuous map generalization indeed benefits from optimization.
Optimization problems with composite functions deal with the minimization of the sum of a smooth function and a convex nonsmooth function. In this thesis, several numerical methods for solving such problems in finite-dimensional spaces are discussed, which are based on proximity operators.
After some basic results from convex and nonsmooth analysis are summarized, a first-order method, the proximal gradient method, is presented and its convergence properties are discussed in detail. Known results from the literature are summarized and supplemented by additional ones. Subsequently, the main part of the thesis is the derivation of two methods which, in addition, make use of second-order information and are based on proximal Newton and proximal quasi-Newton methods, respectively. The difference between the two methods is that the first one uses a classical line search, while the second one uses a regularization parameter instead. Both techniques lead to the advantage that, in contrast to many similar methods, global convergence to stationary points can be proved in the respective detailed convergence analysis without any restrictive preconditions. Furthermore, comprehensive results on the local convergence properties and convergence rates of these algorithms are established under rather weak assumptions. A method for the solution of the arising proximal subproblems is also investigated.
In addition, the thesis contains an extensive collection of application examples and a detailed discussion of the related numerical results.
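To illustrate the basic first-order building block, the following minimal sketch applies the proximal gradient method to \(\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1\), whose nonsmooth part has soft-thresholding as its proximity operator. Step size, data, and iteration count are illustrative choices, not taken from the thesis.

```python
# Proximal gradient method for min 0.5*||Ax - b||^2 + lam*||x||_1 (a LASSO-type example).
# A minimal illustrative sketch; problem data and parameters are not from the thesis.
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    x = np.zeros(A.shape[1])
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2     # step size 1/L, L = Lipschitz constant of grad f
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                # gradient of the smooth part f(x) = 0.5*||Ax - b||^2
        x = soft_threshold(x - alpha * grad, alpha * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
print(np.round(proximal_gradient(A, b, lam=0.1)[:8], 3))
```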
The Software Defined Networking (SDN) paradigm offers network operators numerous improvements in terms of flexibility, scalability, as well as cost efficiency and vendor independence. However, in order to maximize the benefit from these features, several new challenges in areas such as management and orchestration need to be addressed. This dissertation makes contributions towards three key topics from these areas.
Firstly, we design, implement, and evaluate two multi-objective heuristics for the SDN controller placement problem. Secondly, we develop and apply mechanisms for automated decision making based on the Pareto frontiers that are returned by the multi-objective optimizers. Finally, we investigate and quantify the performance benefits for the SDN control plane that can be achieved by integrating information from external entities such as Network Management Systems (NMSs) into the control loop. Our evaluation results demonstrate the impact of optimizing various parameters of softwarized networks at different levels and are used to derive guidelines for an efficient operation.
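As a small illustration of working with the Pareto frontiers mentioned above, the following sketch filters hypothetical controller-placement candidates by Pareto dominance over two objectives to be minimized (e.g. worst-case latency and load imbalance); it is not one of the dissertation's heuristics or decision-making mechanisms.

```python
# Pareto-dominance filter for candidate solutions with objectives to be minimized.
# A hypothetical illustration, not the dissertation's placement heuristics.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """candidates: list of (name, (objective_1, objective_2, ...)) tuples."""
    return [(name, obj) for name, obj in candidates
            if not any(dominates(other, obj) for _, other in candidates if other != obj)]

# Hypothetical placements with (worst-case latency in ms, controller load imbalance).
placements = [("P1", (12.0, 0.40)), ("P2", (15.0, 0.20)), ("P3", (14.0, 0.45)), ("P4", (11.0, 0.55))]
print(pareto_front(placements))  # P3 is dominated by P1 and is filtered out
```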