The subject of this thesis is the controllability of interconnected linear systems, where the interconnection parameters are the control variables. The study of accessibility and controllability of bilinear systems is closely related to their system Lie algebra. In 1976, Brockett classified all possible system Lie algebras of linear single-input, single-output (SISO) systems under time-varying output feedback. Here, Brockett's results are generalized to networks of linear systems, where time-varying output feedback is applied according to the interconnection structure of the network. First, networks of linear SISO systems are studied under the assumption that all interconnections are independently controllable. By calculating the system Lie algebra, it is shown that accessibility of the controlled network is equivalent to strong connectedness of the underlying interconnection graph, provided the network has at least three subsystems. Networks with two subsystems are not captured by these proofs; thus, we give results for this particular case under additional assumptions on either the graph structure or the dynamics of the node systems, neither of which is necessary. Additionally, the system Lie algebra is studied in case the interconnection graph is not strongly connected. Then, we show how to adapt the proof ideas to networks of multi-input, multi-output (MIMO) systems. We generalize results for the system Lie algebra to networks of MIMO systems both under output feedback and under restricted output feedback. Moreover, the case of generalized interconnections is studied, i.e., parallel edges and linear dependencies among the interconnection controls are allowed. This new setting requires distinguishing between homogeneous and heterogeneous networks, and only sufficient conditions guaranteeing accessibility of the controlled network can be found. As an example, networks with Toeplitz interconnection structure are studied.
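The graph-theoretic criterion above lends itself to a quick computational check. The following minimal sketch (assuming Python with the networkx library, which is not part of the thesis) tests strong connectedness of an interconnection graph with at least three subsystems:

```python
# Illustration of the accessibility criterion from the abstract: for a
# network of >= 3 SISO subsystems with independently controllable
# interconnections, accessibility of the controlled network is equivalent
# to strong connectedness of the interconnection graph.
import networkx as nx

# Directed interconnection graph: an edge (i, j) means the output of
# subsystem i feeds, via a controllable gain, the input of subsystem j.
G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])  # a directed 3-cycle

assert G.number_of_nodes() >= 3  # the two-subsystem case needs extra assumptions
print("accessible:", nx.is_strongly_connected(G))  # True for the 3-cycle
```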
With the introduction of OpenFlow by Stanford University in 2008, a process began in the area of network research that questions the predominant approach of fully distributed network control. OpenFlow is a communication protocol that makes it possible to externalize the network control plane from network devices, such as routers, and to realize it as a logically centralized entity in software. For this concept, the term "Software Defined Networking" (SDN) was coined in the scientific discourse.
For network operators, this concept has several advantages. The two most important can be summarized as cost savings and flexibility. First, the uniform interface to the network hardware (the "Southbound API"), as implemented by OpenFlow, makes it possible to combine devices and software from different manufacturers, which increases innovation and price pressure on them. Second, realizing the network control plane as freely programmable software with open interfaces (the "Northbound API") provides the opportunity to adapt it to the individual circumstances of the operator's network and to exchange information with the applications it serves. This allows the network to become more flexible, to react more quickly to changing circumstances, and to transport traffic more effectively, tailored to the user's "Quality of Experience" (QoE).
The approach of a separate network control layer for packet-based networks is not new and has already been proposed several times in the past. Accordingly, the SDN approach has raised many questions about its feasibility in terms of efficiency and applicability. These questions are caused to some extent by the fact that there is, to date, no generally accepted definition of the SDN concept. It is therefore one part of this thesis to derive such a definition. In addition, several of the open issues are investigated. These investigations follow three aspects: the performance evaluation of Software Defined Networking, applications on the SDN control layer, and the usability of the SDN Northbound API for creating application-awareness in network operation.
Performance evaluation of Software Defined Networking: The question of the efficiency of an SDN-based system was one of the most important from the beginning. In this thesis, experimental measurements of the performance of OpenFlow-enabled switch hardware and control software were conducted to answer this question. The results of these measurements were used as input parameters for an analytical model of the reactive SDN approach. Through the model it could be determined that the performance of the software control layer, often called the "controller", is crucial for the overall performance of the system, but that the approach is generally viable. Based on this finding, a software tool for analyzing the performance of SDN controllers was developed. This software emulates the forwarding layer of an SDN network towards the control software and can thus determine its performance in different situations and configurations. The measurements with this software showed quite significant differences in the behavior of different control software implementations. Among other things, it was shown that some implementations exhibit different characteristics for different switches, in particular in terms of message processing speed. Under certain circumstances this can lead to network failures.
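To illustrate why the controller's message processing rate dominates reactive flow setup, here is a minimal sketch using a plain M/M/1 queueing approximation. This is an illustrative stand-in, not the thesis's actual analytical model, and all rates are invented:

```python
# Minimal sketch (not the thesis's exact model): treating the controller as
# an M/M/1 queue to show how the control-plane service rate drives the
# latency of reactive flow setup. All numbers are made up for illustration.

def mm1_sojourn_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time a request spends in an M/M/1 system (waiting + service)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

new_flows_per_s = 9_000.0  # assumed rate of table-miss events hitting the controller
for controller_rate in (10_000.0, 20_000.0, 100_000.0):  # requests per second
    t = mm1_sojourn_time(new_flows_per_s, controller_rate)
    print(f"service rate {controller_rate:>9.0f}/s -> mean setup delay {t * 1e3:.2f} ms")
```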
Applications on the SDN control layer: The core of Software Defined Networking is the intelligent network applications that operate on the control layer. However, their development is still in its infancy, and little is known about the technical possibilities and their limitations. Therefore, this thesis investigates the relationship between an SDN-based and a classical implementation of a network function: the monitoring of network links and the traffic they carry. A typical approach for this task, built on wiretapping and specialized measurement hardware, was compared with an implementation based on OpenFlow switches and a special SDN control application. The results of the comparison show that the SDN version can compete with the traditional measurement set-up in terms of measurement accuracy for bandwidth and delay estimation. However, a compromise has to be found for measurements below the millisecond range.
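The basic idea behind monitoring links from switch statistics rather than wiretaps can be sketched in a few lines. The following hypothetical example estimates bandwidth from two successive byte-counter samples; it is not the thesis's implementation:

```python
# Hypothetical sketch of counter-based bandwidth estimation: poll a port's
# byte counter twice and divide the difference by the elapsed time.

def bandwidth_bps(bytes_t1: int, bytes_t2: int, t1: float, t2: float) -> float:
    """Average bandwidth in bit/s between two counter samples."""
    if t2 <= t1:
        raise ValueError("samples must be ordered in time")
    return (bytes_t2 - bytes_t1) * 8 / (t2 - t1)

# Example: 1.25 MB transferred within a 100 ms polling interval.
print(bandwidth_bps(0, 1_250_000, 0.0, 0.1))  # 100_000_000.0 bit/s
# The polling interval bounds the temporal resolution, which is one reason
# why measurements below the millisecond range require a compromise.
```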
Another question regarding SDN control applications is whether and how well they can solve existing problems in networks. In this thesis, two SDN-based programs were developed to address two typical network issues. The first is the tool "IPOM", which gives a researcher who is confined to a fixed physical test network topology considerably more flexibility in studying the effects of network structures.
The second program provides an interface between the cloud orchestration software "OpenNebula" and an OpenFlow controller. Its purpose was to investigate experimentally whether notifying the network in advance of an impending relocation of a virtual service in a data center is sufficient to ensure the continuous operation of that service. This was demonstrated using the example of a video service.
Usability of the SDN Northbound API for creating application-awareness in network operation: Currently, the fact that the network and the applications that run on it are developed and operated separately leads to problems in network operation. With the Northbound API, SDN offers an open interface that enables the exchange of information between both worlds during operation. One aim of this thesis was to investigate whether this interface can be exploited so that the QoE experienced by the user can be maintained at a high level. For this purpose, the QoE influence factors of a challenging application were determined by means of a subjective survey study. The application is cloud gaming, in which video game environments are rendered in the cloud and transported as video over the network to the user. It was shown that apart from the most important QoS influence factor, i.e., packet loss on the downlink, the type of game and its speed also play a role. This demonstrates that, in addition to QoS, the application state is important and should be communicated to the network. Since an implementation of such a state-aware SDN was not possible for cloud gaming due to its proprietary implementation, the application "YouTube video streaming" was chosen as an alternative in this thesis. For this application, status information is retrievable via the "Yomo" tool and can be used for network control. It was shown that an SDN-based implementation of an application-aware network has distinct advantages over traditional network management methods and that the user-perceived quality can be maintained in spite of disturbances.
Routing is one of the most important issues in any communication network. It defines on which path packets are transmitted from the source of a connection to the destination. It allows controlling the distribution of flows between different locations in the network and is thereby a means to influence the load distribution or to satisfy constraints imposed by particular applications. As failures in communication networks occur regularly and cannot be completely avoided, routing is required to be resilient against such outages, i.e., routing still has to be able to forward packets on backup paths even if primary paths are no longer working.
Throughout the years, various routing technologies have been introduced that differ greatly in their control structure, in their way of working, and in their ability to handle certain failure cases. Each routing approach opens up its own specific questions regarding configuration, optimization, and the inclusion of resilience issues. This monograph investigates, using the example of three particular routing technologies, some concrete issues regarding the analysis and optimization of resilience. It thereby contributes to a better general, technology-independent understanding of these approaches and of their diverse potential for use in future network architectures.
The first considered routing type is decentralized intra-domain routing based on administrative IP link costs and the shortest path principle. Typical examples are today's common intra-domain routing protocols OSPF and IS-IS. This type of routing includes automatic restoration abilities in case of failures, which makes it generally very robust, even in the case of severe network outages involving several failed components. Furthermore, special IP Fast Reroute mechanisms allow for a faster reaction to outages. For routing based on link costs, traffic engineering, e.g., the optimization of the maximum relative link load in the network, can be done indirectly by changing the administrative link costs to adequate values.
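A small sketch (assuming Python with networkx, which is not part of the monograph) illustrates this indirection: paths are a pure function of the administrative costs, so changing a single cost re-routes traffic:

```python
# Link-cost routing in miniature: shortest paths follow the administrative
# costs, so traffic engineering works indirectly by tuning those costs.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from(
    [("A", "B", 1), ("B", "C", 1), ("A", "D", 1), ("D", "C", 5)], weight="cost"
)
print(nx.shortest_path(G, "A", "C", weight="cost"))  # ['A', 'B', 'C'] (cost 2)

G["D"]["C"]["cost"] = 0.5  # lowering one administrative cost shifts the flow
print(nx.shortest_path(G, "A", "C", weight="cost"))  # ['A', 'D', 'C'] (cost 1.5)
```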
The second considered routing type, MPLS-based routing, relies on the a priori configuration of primary and backup paths, so-called Label Switched Paths. The routing layout of MPLS paths offers more freedom compared to IP-based routing, as it is not restricted by shortest path constraints: arbitrary paths can be set up. However, this in general involves a higher configuration effort.
Finally, in the third considered routing type, typically centralized routing using a Software Defined Networking (SDN) architecture, simple switches only forward packets according to routing decisions made by centralized controller units. SDN-based routing layouts offer the same freedom as explicit paths configured using MPLS. In case of a failure, new rules can be set up by the controllers to continue routing in the reduced topology. However, new resilience issues arise from the centralized architecture: if controllers are no longer reachable, the forwarding rules in the individual nodes can no longer be adapted. This might render rerouting infeasible in case of connection problems in severe failure scenarios.
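As a hedged illustration of the explicit-path freedom of MPLS and SDN, the following sketch (again assuming networkx) computes an edge-disjoint primary/backup pair, so that no single link failure can hit both paths:

```python
# Explicit-path technologies such as MPLS or SDN can pin a primary and a
# backup path; choosing them edge-disjoint protects against any single
# link failure on the primary.
import networkx as nx

G = nx.Graph([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"), ("a", "b")])
primary, backup = list(nx.edge_disjoint_paths(G, "s", "t"))[:2]
print("primary:", primary)  # e.g. ['s', 'a', 't']
print("backup: ", backup)   # e.g. ['s', 'b', 't']
```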
This thesis contributes to several issues in the context of SDN and NFV, with an emphasis on performance and management.
The main contributions are guidelines for operators migrating to software-based networks, as well as an analytical model of packet processing in a Linux system using the kernel's NAPI.
In this thesis we study various aspects of chaos synchronization of time-delayed coupled chaotic maps. A network of identical nonlinear units interacting by time-delayed couplings can synchronize to a common chaotic trajectory. Even for large delay times the system can completely synchronize without any time shift.

In the first part we study chaotic systems with multiple time delays that range over several orders of magnitude. We show that these time scales emerge in the Lyapunov spectrum: different parts of the spectrum scale with the different delays. We define various types of chaos depending on the scaling of the maximum exponent. The type of chaos determines the synchronization ability of coupled networks. This is, in particular, relevant for the synchronization properties of networks of networks, where time delays within a subnetwork are shorter than the corresponding time delays between the different subnetworks. If the maximum Lyapunov exponent scales with the short intra-network delay, only the elements within a subnetwork can synchronize. If, however, the maximum Lyapunov exponent scales with the long inter-network connection, complete synchronization of all elements is possible. The results are illustrated analytically for Bernoulli maps and numerically for tent maps.

In the second part the attractor dimension at the transition to complete chaos synchronization is investigated. In particular, we determine the Kaplan-Yorke dimension from the spectrum of Lyapunov exponents for iterated maps. We argue that the Kaplan-Yorke dimension must be discontinuous at the transition and compare it to the correlation dimension. For a system of Bernoulli maps we indeed find a jump in the correlation dimension. The magnitude of the discontinuity in the Kaplan-Yorke dimension is calculated for networks of Bernoulli units as a function of the network size. Furthermore, the scaling of the Kaplan-Yorke dimension as well as of the Kolmogorov entropy with system size and time delay is investigated. Finally, we study the change in the attractor dimension for systems with parameter mismatch.

In the third and last part the linear response of synchronized chaotic systems to small external perturbations is studied. The distribution of the distances from the synchronization manifold, i.e., the deviations between two synchronized chaotic units due to external perturbations on the transmitted signal, is used as a measure of the linear response. It is calculated numerically and, for some special cases, analytically. Depending on the model parameters, this distribution has power law tails in the region of synchronization, leading to diverging moments. The linear response is also quantified by means of the bit error rate of a transmitted binary message which perturbs the synchronized system. The bit error rate is given by an integral over the distribution of distances and is studied numerically for Bernoulli, tent and logistic maps. It displays a complex nonmonotonic behavior in the region of synchronization. For special cases the distribution of distances has a fractal structure, leading to a devil's staircase for the bit error rate as a function of coupling strength. The response to small harmonic perturbations shows resonances related to coupling and feedback delay times. A bi-directionally coupled chain of three units can completely filter out the perturbation; thus the second moment and the bit error rate become zero.
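For reference, the Kaplan-Yorke dimension used in the second part is obtained from the ordered Lyapunov spectrum by a standard formula; the sketch below implements it in Python with an invented example spectrum:

```python
# Standard Kaplan-Yorke formula: D_KY = j + (lam_1 + ... + lam_j) / |lam_{j+1}|,
# where the exponents are ordered decreasingly and j is the largest number of
# leading exponents whose cumulative sum is still non-negative.
from itertools import accumulate

def kaplan_yorke_dimension(exponents):
    lam = sorted(exponents, reverse=True)
    sums = list(accumulate(lam))
    if sums[0] < 0:
        return 0.0                 # no expanding direction at all
    if sums[-1] >= 0:
        return float(len(lam))     # cumulative sum never turns negative
    j = max(i for i, s in enumerate(sums) if s >= 0) + 1
    return j + sums[j - 1] / abs(lam[j])

print(kaplan_yorke_dimension([0.4, 0.1, -0.2, -0.6]))  # 3 + 0.3/0.6 = 3.5
```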
This thesis deals with chaos synchronization in networks with time-delayed couplings. A network of chaotic units can synchronize isochronally and completely even if the exchange of signals is subject to one or more delay times. For a network of identical units, the master stability function method of Pecora and Carroll has become the established tool for stability analysis. For a network of coupled iterated Bernoulli maps, this function corresponds to polynomials whose degree equals the largest delay time. The stability problem thus reduces to examining the location of the roots of these polynomials relative to the unit circle. Such an analysis can be carried out numerically, for example with the Schur-Cohn theorem, but analytical results can also be obtained. In this thesis, Bernoulli networks with one or more time-delayed couplings and/or self-feedbacks are investigated. Statements are made about parts of the stability region that are independent of the delay times. Furthermore, results are given for systems with very large delay times. In particular, it is shown that stable chaos synchronization is impossible in a Bernoulli network if the delay time is much larger than the time scale of the local dynamics, i.e., the Lyapunov time. Moreover, for certain systems with multiple delay times, symmetry arguments rule out stable chaos synchronization when the delay times are in particular ratios to each other. For example, in a doubly bidirectionally coupled pair without self-feedback and with two different delay times, stable chaos synchronization is impossible if the delay times are in a ratio of coprime odd integers. Chaos synchronization can also be ruled out in a bipartite network with two large delay times that differ only by a small amount. Finally, a self-consistent argument is presented that interprets the emergence of chaos synchronization through the mixing of the signals of the individual units and relies, among other things, on the coprimality of the cycles of a network. Lastly, it is examined whether some of the results obtained for Bernoulli networks carry over to other chaotic networks. Noteworthy is the very good agreement between the results for a Bernoulli network and those for an analogous network of coupled semiconductor laser equations, as well as the agreement with experimental results from a system of semiconductor lasers.
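The reduction of the stability problem to a unit-circle root test can be sketched numerically. The linearization below, with Bernoulli slope a, coupling strength eps, delay tau, and a transversal eigenvalue gamma of the coupling matrix, is a standard single-delay form assumed here for illustration, not taken verbatim from the thesis:

```python
# Hedged sketch of the numerical root test. For a Bernoulli network, a
# standard single-delay linearization of the transversal perturbation reads
#   delta_{t+1} = a*(1-eps)*delta_t + a*eps*gamma*delta_{t-tau},
# with characteristic polynomial z^(tau+1) - a*(1-eps)*z^tau - a*eps*gamma.
# Synchronization is stable iff all roots lie strictly inside the unit circle.
import numpy as np

def transversally_stable(a: float, eps: float, gamma: complex, tau: int) -> bool:
    coeffs = np.zeros(tau + 2, dtype=complex)
    coeffs[0] = 1.0              # z^(tau+1)
    coeffs[1] = -a * (1 - eps)   # z^tau
    coeffs[-1] = -a * eps * gamma
    return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

print(transversally_stable(a=1.5, eps=0.9, gamma=-0.5, tau=10))
```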
Objectives: Evaluation of the care situation of patients with acute ST-elevation myocardial infarction (STEMI) in the Mainfranken myocardial infarction network, newly founded in 2007, and comparison of the actual state with guideline recommendations. Analysis of treatment times and identification of opportunities for improvement within the network. In addition, it was to be investigated whether feedback meetings, as a quality-management intervention, improve treatment times over the course of the study period. Methods: From October 2007 to December 2008, baseline data as well as data on the rescue and treatment chain of patients with acute STEMI (symptom duration <12 h) who underwent acute coronary angiography with the aim of PCI at the Medizinische Klinik und Poliklinik I of the Universitätsklinikum Würzburg were recorded prospectively as part of the multicenter FiTT-STEMI trial. During the study period, the analyzed data were presented every three months to all participants in the rescue and treatment chain at a feedback meeting. Results: In this period, 188 patients were included in the study (19% female, 81% male), of whom 85% received PCI following coronary angiography. The mean age was 62±12 years; 15% of the patients were older than 75 years. The mean TIMI risk score was 3.7 points. In-hospital mortality was 6.9%. The median prehospital time was 120 min, with no significant change across the quarters. Secondary transport or prehospital contact with the family physician prolonged the median prehospital time by 173 and 57 min, respectively. The median door-to-balloon (D2B) time over the entire study period was 76 min; only 33% of patients achieved a guideline-conforming D2B time of <60 min. Most of the D2B time was spent between arrival at the PCI hospital and arrival in the cardiac catheterization laboratory (door-to-cath time). The reduction of the D2B time from 80 min in the first quarter to 70 min in the fifth was not statistically significant. The median contact-to-balloon (C2B) time over the entire study period was 139 min and was reduced statistically significantly from 164 min in the first quarter to 112 min in the fifth. As a result, the proportion of patients treated according to guidelines (C2B time <120 min) increased from 15% to 58% in the overall cohort and from 24% to 63% among patients with primary transport. Conclusion: The baseline characteristics of the patient population of the Mainfranken myocardial infarction network corresponded to those of other national and international registries. Since PCI within 120 min of first medical contact is considered the best possible therapy for STEMI, and despite the improvement during the study period only 58% of patients in the fifth quarter received PCI within this time interval, every effort should be made to further shorten the D2B and C2B times in the network. To this end, direct handover in the cardiac catheterization laboratory should be made possible, since the median door-to-cath time in Würzburg took 36 to 48 min. Furthermore, education and information campaigns as well as training for rescue personnel and patients should aim to avoid secondary transport, family physician contact, and delayed alerting of the emergency medical services, since these massively prolonged the prehospital time.
To what extent the reduction in treatment times observed during the study period is attributable to the feedback meetings remains uncertain, since the change may also have been caused by the establishment of the newly founded network itself.
This work is subdivided into two main areas: resilient admission control and resilient routing. The work gives an overview of the state of the art of quality-of-service mechanisms in communication networks and proposes a categorization of admission control (AC) methods. These approaches are investigated regarding performance, more precisely, regarding the potential resource utilization, by dimensioning the capacity for a network with a given topology, traffic matrix, and required flow blocking probability. In case of a failure, the affected traffic is rerouted over backup paths, which increases the traffic rate on the respective links. To guarantee the effectiveness of admission control also in failure scenarios, this increased traffic rate must be taken into account for capacity dimensioning, which leads to resilient AC. Capacity dimensioning is not feasible for existing networks with already given link capacities. To apply resilient AC in this case, the sizes of the distributed AC budgets must be adapted according to the traffic matrix in such a way that the maximum blocking probability over all flows is minimized and the capacity of no link is exceeded by the admissible traffic rate in any failure scenario. Several algorithms for the solution of this problem are presented and compared regarding their efficiency and fairness. A prototype for resilient AC was implemented in the laboratories of Siemens AG in Munich within the scope of the project KING.

Resilience requires additional capacity on the backup paths for failure scenarios. The amount of this backup capacity depends on the routing and can be minimized by routing optimization. New protection switching mechanisms are presented that quickly divert traffic around outage locations. They are simple and can be implemented, e.g., with MPLS technology. The Self-Protecting Multi-Path (SPM) is a multi-path consisting of disjoint partial paths. The traffic is distributed over all faultless partial paths according to an optimized load balancing function, both in the working case and in failure scenarios. Performance studies show that the network topology and the traffic matrix also influence the amount of required backup capacity significantly. The example of the COST-239 network illustrates that conventional shortest path routing may need 50% more capacity than the optimized SPM if all single link and node failures are protected.
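As a hedged illustration of dimensioning capacity for a required flow blocking probability, the sketch below uses the classical Erlang-B recursion; it is a stand-in for, not a reproduction of, the dimensioning procedures of this work:

```python
# Illustrative sketch: the Erlang-B recursion gives the blocking probability
# for a given offered load, and capacity can be dimensioned as the smallest
# number of capacity units keeping blocking below a target. A failure
# scenario with rerouted traffic simply raises the offered load.

def erlang_b(offered_load: float, servers: int) -> float:
    """Blocking probability B(A, n) via the numerically stable recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def dimension(offered_load: float, target_blocking: float) -> int:
    """Smallest capacity keeping blocking at or below the target."""
    n = 0
    while erlang_b(offered_load, n) > target_blocking:
        n += 1
    return n

normal = dimension(100.0, 1e-3)      # working case
resilient = dimension(150.0, 1e-3)   # rerouted traffic after a failure
print(normal, resilient)             # resilient AC needs extra capacity
```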
Nowadays, data centers are becoming increasingly dynamic due to the common adoption of virtualization technologies. Systems can scale their capacity on demand by growing and shrinking their resources dynamically based on the current load. However, the complexity and performance of modern data centers are influenced not only by the software architecture, middleware, and computing resources, but also by network virtualization, network protocols, network services, and configuration. The field of network virtualization is not as mature as server virtualization, and there are multiple competing approaches and technologies. Performance modeling and prediction techniques provide a powerful tool to analyze the performance of modern data centers. However, given the wide variety of network virtualization approaches, no common approach exists for modeling and evaluating the performance of virtualized networks.
The performance community has proposed multiple formalisms and models for evaluating the performance of infrastructures based on different network virtualization technologies. The existing performance models can be divided into two main categories: coarse-grained analytical models and highly detailed simulation models. Analytical performance models are normally defined at a high level of abstraction; they abstract away many details of the real network and therefore have limited predictive power. Simulation models, on the other hand, are normally focused on a selected networking technology and take into account many specific performance-influencing factors, resulting in detailed models that are tightly bound to a given technology, infrastructure setup, or protocol stack.
Existing models are inflexible, that is, they provide a single solution method without giving the user means to influence the trade-off between solution accuracy and solution overhead. To gain flexibility in the performance prediction, the user has to build multiple different performance models to obtain multiple performance predictions. Each performance prediction may then have a different focus, different performance metrics, prediction accuracy, and solving time.
The goal of this thesis is to develop a modeling approach that does not require the user to have experience in any of the applied performance modeling formalisms. The approach offers flexibility in modeling and analysis by balancing between (a) the generic character and low overhead of coarse-grained analytical models and (b) the higher prediction accuracy of more detailed simulation models.
The contributions of this thesis intersect with technologies and research areas such as software engineering, model-driven software development, domain-specific modeling, performance modeling and prediction, networking and data center networks, network virtualization, Software-Defined Networking (SDN), and Network Function Virtualization (NFV). The main contributions of this thesis compose the Descartes Network Infrastructure (DNI) approach and include:
• Novel modeling abstractions for virtualized network infrastructures. This includes two meta-models that define modeling languages for modeling data center network performance. The DNI and miniDNI meta-models provide means for representing network infrastructures at two different abstraction levels. Regardless of which variant of the DNI meta-model is used, the modeling language provides generic modeling elements that allow describing the majority of existing and future network technologies, while at the same time abstracting factors that have little influence on the overall performance. I focus on SDN and NFV as examples of modern virtualization technologies.
• The network deployment meta-model: an interface between DNI and other meta-models that allows defining mappings between DNI and other descriptive models. The integration with other domain-specific models allows capturing behaviors that are not reflected in the DNI model, for example, software bottlenecks, server virtualization, and middleware overheads.
• Flexible model solving with model transformations. The transformations enable solving a DNI model by transforming it into a predictive model. The model transformations vary in size and complexity depending on the amount of data abstracted in the transformation process and provided to the solver. In this thesis, I contribute six transformations that transform DNI models into various predictive models based on the following modeling formalisms: (a) OMNeT++ simulation, (b) Queueing Petri Nets (QPNs), and (c) Layered Queueing Networks (LQNs). For each of these formalisms, two predictive models with different levels of detail are generated. Some predictive models can be solved using multiple alternative solvers, resulting in up to ten different automated solving methods for a single DNI model.
• A model extraction method that supports the modeler in the modeling process by automatically prefilling the DNI model with the network traffic data. The contributed traffic profile abstraction and optimization method provides a trade-off by balancing between the size and the level of detail of the extracted profiles.
• A method for selecting feasible solving methods for a DNI model. The method proposes a set of solvers based on a trade-off analysis characterizing each transformation with respect to various parameters, such as its specific limitations, expected prediction accuracy, expected run time, required resources in terms of CPU and memory consumption, and scalability (a minimal illustration follows after this list).
• An evaluation of the approach in the context of two realistic systems. I evaluate the approach with a focus on factors such as the prediction of network capacity and interface throughput, applicability, and flexibility in trading off prediction accuracy against solving time. Despite not focusing on maximizing the prediction accuracy, I demonstrate that in the majority of cases the prediction error is low: up to 20% for uncalibrated models and up to 10% for calibrated models, depending on the solving technique.
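As announced in the list above, here is a purely hypothetical sketch of the solver-selection idea; all method names and numbers are invented and do not reproduce the thesis's actual trade-off analysis:

```python
# Hypothetical sketch: each transformation/solver pair is characterized by
# expected accuracy, run time, and resource needs; feasible candidates are
# filtered against the user's constraints and ranked by run time.
from dataclasses import dataclass

@dataclass
class SolvingMethod:
    name: str
    expected_error: float      # relative prediction error (invented)
    expected_runtime_s: float  # expected solving time (invented)
    memory_gb: float           # expected memory footprint (invented)

CANDIDATES = [
    SolvingMethod("analytical-coarse", 0.20, 1.0, 0.1),
    SolvingMethod("LQN-detailed", 0.10, 60.0, 1.0),
    SolvingMethod("OMNeT++-simulation", 0.05, 3600.0, 4.0),
]

def select(max_error: float, max_runtime_s: float, max_memory_gb: float):
    feasible = [m for m in CANDIDATES
                if m.expected_error <= max_error
                and m.expected_runtime_s <= max_runtime_s
                and m.memory_gb <= max_memory_gb]
    return sorted(feasible, key=lambda m: m.expected_runtime_s)

print([m.name for m in select(0.15, 600.0, 2.0)])  # ['LQN-detailed']
```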
In summary, this thesis presents the first approach to flexible run-time performance prediction in data center networks, including networks based on SDN. It provides the ability to flexibly balance between performance prediction accuracy and solving overhead. The approach provides the following key benefits:
• It is possible to predict the impact of changes in the data center network on performance. The changes include changes in network topology, hardware configuration, traffic load, and application deployment.
• DNI can successfully model and predict the performance of multiple different network infrastructures, including proactive SDN scenarios.
• The prediction process is flexible, that is, it balances between the granularity of the predictive models and the solving time. Decreased prediction accuracy is usually rewarded with savings in solving time and in the resources required for solving.
• Users are enabled to conduct performance analyses using multiple different prediction methods without requiring expertise and experience in each of the modeling formalisms.
The components of the DNI approach can also be applied to scenarios that are not considered in this thesis. The approach is generalizable and applicable, for example, in the following settings: (a) networks outside of data centers may be analyzed with DNI as long as the background traffic profile is known; (b) uncalibrated DNI models may serve as a basis for design-time performance analysis; (c) the method for extracting and compacting traffic profiles may be used for other, non-network workloads as well.
This dissertation focuses on the performance evaluation of all components of Software Defined Networking (SDN) networks and covers their whole architecture. First, the isolation between virtual networks sharing the same physical resources is investigated with SDN switches of several vendors. Then, influence factors on the isolation are identified and evaluated. Second, the impact of control mechanisms on the performance of the data plane is examined through the flow rule installation time of SDN switches with different controllers. It is shown that both the switch hardware and the controller instance have a specific influence on the installation time. Finally, several traffic flow monitoring methods of an SDN controller are investigated, and a new monitoring approach is developed and evaluated. It is confirmed that the proposed method allows monitoring particular flows while consuming fewer resources than the standard approach. Based on the findings in this thesis, controller developers can, on the one hand, refer to the work related to the control plane, such as flow monitoring or flow rule installation, to improve the performance of their applications. On the other hand, network administrators can apply the presented methods to select a suitable combination of controller and switches for their SDN networks, based on their performance requirements.