Institut für Informatik
Today, knowledge base authoring for the engineering of intelligent systems is performed mainly with tools offering graphical user interfaces. An alternative human-computer interaction paradigm is the maintenance and manipulation of electronic documents, which provides several advantages with respect to the social aspects of knowledge acquisition. Until today, however, it has found hardly any attention as a method for knowledge engineering.
This thesis provides a comprehensive discussion of document-centered knowledge acquisition with knowledge markup languages. There, electronic documents are edited by the knowledge authors and the executable knowledge base entities are captured by markup language expressions within the documents. The analysis of this approach reveals significant advantages as well as new challenges when compared to the use of traditional GUI-based tools.
Some advantages of the approach are the low barriers for domain expert participation, the simple integration of informal descriptions, and the possibility of incremental knowledge formalization. It therefore provides good conditions for building up a knowledge acquisition process based on the mixed-initiative strategy, a flexible combination of direct and indirect knowledge acquisition. Furthermore, it turns out that document-centered knowledge acquisition with knowledge markup languages provides high potential for creating customized knowledge authoring environments, tailored to the needs of the current knowledge engineering project and its participants. The thesis derives a process model to optimally exploit this customization potential, evolving a project-specific authoring environment by an agile process on the meta level. This meta-engineering process continuously refines the three aspects of the document space: the employed markup languages, the scope of the informal knowledge, and the structuring and organization of the documents. The evolution of the first aspect, the markup languages, plays a key role, implying the design of project-specific markup languages that are easily understood by the knowledge authors and that are suitable to capture the required formal knowledge precisely. The goal of the meta-engineering process is to create a knowledge authoring environment where structure and presentation of the domain knowledge comply well with the users' mental model of the domain. In that way, the approach can help to ease major issues of knowledge-based system development, such as high initial development costs and long-term maintenance problems.
In practice, the application of the meta-engineering approach for document-centered knowledge acquisition poses several technical challenges that need to be coped with by appropriate tool support. This thesis presents KnowWE, an extensible document-centered knowledge acquisition environment. The system is designed to support the technical tasks implied by the meta-engineering approach, for instance the design and implementation of new markup languages, content refactoring, and authoring support. It is used to evaluate the approach in several real-world case studies from different domains, such as medicine and engineering.
We conclude the thesis with a summary and point out further interesting research questions concerning the document-centered knowledge acquisition approach.
This dissertation presents controller design methodologies for a formation of cooperative mobile robots performing trajectory tracking and convoy protection tasks. Two major problems related to multi-agent formation control are addressed, namely the time-delay and optimality problems. For the task of trajectory tracking, a leader-follower based system structure is adopted for the controller design, where the selection criteria for controller parameters are derived through analyses of characteristic polynomials. The resulting parameters ensure the stability of the system and overcome the steady-state error as well as the oscillation behavior under time-delay effects. In the convoy protection scenario, a decentralized coordination strategy for the balanced deployment of mobile robots is first proposed. Based on this coordination scheme, optimal controller parameters are generated in both centralized and decentralized fashion to achieve dynamic convoy protection in a unified framework, where distributed optimization techniques are applied in the decentralized strategy. This unified framework takes into account the motion of the target to be protected and the desired system performance, for instance minimal energy expenditure or equal inter-vehicle distances.
Both the trajectory tracking and the convoy protection tasks are demonstrated through simulations and real-world hardware experiments based on the robotic equipment of the Department of Computer Science VII, University of Würzburg.
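To make the leader-follower idea concrete, the following is a minimal, hypothetical sketch (not the dissertation's actual control law; the kinematics, gain, delay length, and trajectory are all assumed for illustration) of a follower tracking a delayed leader reference:

```python
import numpy as np

# Minimal leader-follower tracking sketch: first-order kinematics with a
# proportional law acting on delayed state information. All values invented.
dt, steps, d = 0.05, 400, 10          # time step, horizon, delay in steps
k_p = 0.8                              # proportional gain (assumed)
offset = np.array([-1.0, 0.0])         # desired follower offset behind the leader

leader = lambda t: np.array([t, np.sin(t)])   # illustrative reference trajectory
follower = np.array([-2.0, 0.5])
history = [follower.copy()]                   # past states used to model the delay

for k in range(steps):
    t = k * dt
    target = leader(t) + offset
    # The follower acts on delayed information, mirroring the time-delay setting.
    delayed = history[max(0, len(history) - 1 - d)]
    u = k_p * (target - delayed)              # simple proportional law
    follower = follower + dt * u
    history.append(follower.copy())

print("final tracking error:", np.linalg.norm(leader(steps * dt) + offset - follower))
```

Increasing the delay `d` in such a toy model reproduces the qualitative effect analyzed above: larger delays degrade tracking and eventually cause oscillations.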
Numerous digitization projects make the knowledge of past centuries available at any time. However, the full potential of document digitization only unfolds when the documents are made available as searchable full texts. With OCR software, this capture can be largely automated. From the 16th century until the middle of the 20th century, Fraktur was the prevalent typeface of the German-speaking world. Due to several peculiarities of Fraktur, however, recognition rates for Fraktur texts usually lag well behind the recognition results achieved on Antiqua texts.
This thesis focuses on improving the recognition results of the OCR software Tesseract on Fraktur texts. To this end, the software and existing language packs were analyzed specifically with respect to the properties of Fraktur. Through dedicated training and adaptations to the software, we then attempted to improve the results and to gain insights into the effectiveness of different approaches.
Through various experiments, the character error rate was reduced from 2.5 percent to 1.85 percent. In addition, tools are presented that simplify the training of new typefaces for Tesseract and enable an evaluation of the achieved improvements.
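Since progress is quantified here by character error rates, a small sketch of how such a rate is typically computed (Levenshtein distance normalized by the ground-truth length; the example strings are invented) may be helpful:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def character_error_rate(ground_truth: str, ocr_output: str) -> float:
    """CER = edit distance / length of the ground-truth text."""
    return levenshtein(ground_truth, ocr_output) / max(1, len(ground_truth))

# One dropped character in a 40-character line -> CER of 2.5 %.
print(character_error_rate("Die Fraktur-Schrift des 16. Jahrhunderts",
                           "Die Fraktur-Schrift de 16. Jahrhunderts"))
```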
Six-degrees-of-freedom (6DOF) object pose estimation is a fundamental problem in many practical robotic applications, where the target or an obstacle with a simple or complex shape can move fast in cluttered environments. In this thesis, a 6DOF pose estimation algorithm is developed based on the fused data from a time-of-flight camera and a color camera. The algorithm is divided into two stages: an annealed-particle-filter-based coarse pose estimation stage and a gradient-descent-based accurate pose optimization stage. In the first stage, each particle is evaluated with a sparse representation; the large inter-frame motion of the target can thus be handled well. In the second stage, the conventional range-data-based Iterative Closest Point algorithm is extended by incorporating the target appearance information and used for calculating the accurate pose by refining the coarse estimate from the first stage. To deal with significant illumination variations during tracking, spherical harmonic illumination modeling is investigated and integrated into both stages. The robustness and accuracy of the proposed algorithm are demonstrated through experiments on various objects in both indoor and outdoor environments. Moreover, real-time performance can be achieved with graphics processing unit acceleration.
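As a rough illustration of the first, annealed-particle-filter stage (a generic sketch only: an invented Gaussian likelihood stands in for the sparse-representation scoring of fused ToF and color data, and particle counts and the annealing schedule are assumed):

```python
import numpy as np

# Sketch of one annealed-particle-filter update for 6DOF pose estimation.
rng = np.random.default_rng(0)

TRUE_POSE = np.array([0.5, -0.2, 1.0, 0.1, 0.0, 0.3])   # x, y, z, roll, pitch, yaw

def likelihood(pose: np.ndarray) -> float:
    """Placeholder observation model; a real one would score the rendered
    object model against the fused range and color data."""
    return np.exp(-np.sum((pose - TRUE_POSE) ** 2))

particles = rng.normal(0.0, 1.0, size=(500, 6))          # initial 6DOF particles
for beta in [0.25, 0.5, 1.0]:                            # annealing: flat -> sharp
    weights = np.array([likelihood(p) ** beta for p in particles])
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    noise = rng.normal(0.0, 0.2 * (1.0 - beta) + 0.02, size=particles.shape)
    particles = particles[idx] + noise                    # resample and diffuse

print("coarse estimate:", particles.mean(axis=0))
```

The flattened likelihood in early layers lets particles survive large inter-frame motion; the final sharp layer concentrates them for the subsequent gradient-based refinement.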
In learning processes, applying the skill to be learned plays an important role. In the context of education at schools and universities, this means it is important to offer pupils and students a sufficient number of practice opportunities. However, the feedback created by teaching staff during "grading" is expensive, since the time required can be considerable depending on the type of exercise.
E-learning systems are one solution to this problem. Suitable systems can not only present learning material but also offer exercises and generate appropriate feedback almost immediately after their completion. In general, however, it is not easy to implement automated methods that grade completed exercises and produce appropriate feedback. For some exercise types, such as multiple-choice questions, this is trivial, but those are mainly suited to testing factual knowledge; practicing learning objectives at the application level is hardly possible with them.
So-called open exercise types, which are usually answered by writing free text in natural language, make it possible to address these learning objectives, which rank higher in common taxonomies of cognitive skills. The information or knowledge entered by learners is thus available only in "unstructured" form. Such unstructured knowledge is difficult to process automatically, which is why training systems that pose exercises of this kind and provide corresponding feedback have not yet caught on. However, there are also open exercise types in which learners enter their knowledge in structured form, so that it is easier to process automatically. For exercises of this kind, training systems can be built that offer pupils and students many practice opportunities, even for practice-oriented applications, without placing an additional burden on teaching staff.
This thesis describes how certain properties of exercises can be exploited to design and implement corresponding training systems. These are exercises whose solutions are structured and machine-interpretable.
The main part of the thesis describes four training systems or their components and reports on the experience gained from their practical use: a component of the training system "CaseTrain" can generate feedback on UML class diagrams. The novel training system "WARP" generates feedback on UML activity diagrams at several levels, among other things by visualizing the behavior of robots in virtual environments as defined by the activity diagrams. "ÜPS" is a training system with which the formulation of SQL queries can be practiced. A further component implemented in "CaseTrain" for image annotation exercises enables immediate, automatic grading of such exercises.
The systems were deployed and evaluated at the University of Würzburg between 2011 and 2014 in lectures with up to 300 students. The evaluation showed high usage and good student ratings of the concepts employed, demonstrating that electronic training systems for open exercise types can be used in practice.
In this work, a novel method for estimating the relative pose of a known object is presented, which relies on an application-specific data fusion process. A PMD sensor in conjunction with a CCD sensor is used to perform the pose estimation. Furthermore, the work provides a method for extending the measurement range of the PMD sensor, along with the necessary calibration methodology. Finally, extensive measurements on a very accurate rendezvous and docking testbed are performed to evaluate the performance, which includes a detailed discussion of lighting conditions.
This technical report introduces the Descartes Modeling Language (DML), a new architecture-level modeling language for modeling Quality-of-Service (QoS) and resource management related aspects of modern dynamic IT systems, infrastructures and services. DML is designed to serve as a basis for self-aware resource management during operation, ensuring that system QoS requirements are continuously satisfied while infrastructure resources are utilized as efficiently as possible.
Extracting metadata from historical documents is a time-consuming, complex, and highly error-prone task that usually has to be carried out by human experts. It is nevertheless necessary in order to establish relations between documents, to answer search queries about historical events correctly, or to build semantic links. To reduce the manual effort of this task, methods of Named Entity Recognition are to be applied. Classifying terms in historical manuscripts, however, poses a major challenge, since the domain exhibits a high variance in spelling, caused among other things by an orthography agreed upon only by convention. This thesis presents methods that can operate even in complex syntactic environments by drawing on information from the context of the terms to be classified and combining it with domain-specific heuristics. It is further evaluated how the metadata obtained in this way can be used to add value to workflow systems for the digitization of historical manuscripts through heuristics for detecting production errors.
In many cases, problems, data, or information can be modeled as graphs. Graphs can be used as a modeling tool wherever connections between distinguishable objects occur. A graph consists of a set of objects, called vertices, and a set of connections, called edges, such that each edge connects a pair of vertices. For example, a social network can be modeled as a graph by turning the users of the network into vertices and friendship relations between users into edges. Physical networks such as computer networks or transportation networks, for example the metro network of a city, can also be seen as graphs.
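As a small illustration of this modeling step (the names and friendship pairs are invented), a graph can be built directly from such relations:

```python
# A graph as a set of vertices plus an adjacency map, mirroring the
# social-network example: users become vertices, friendships become edges.
friendships = [("alice", "bob"), ("bob", "carol"), ("alice", "carol"), ("carol", "dave")]

vertices: set[str] = set()
adjacency: dict[str, set[str]] = {}
for u, v in friendships:
    vertices |= {u, v}
    adjacency.setdefault(u, set()).add(v)   # undirected: store both directions
    adjacency.setdefault(v, set()).add(u)

print(sorted(vertices))           # ['alice', 'bob', 'carol', 'dave']
print(sorted(adjacency["carol"])) # carol's neighbours: ['alice', 'bob', 'dave']
```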
To make graphs, and thereby the data they model, well understandable for users, we need a visualization. Graph drawing deals with algorithms for visualizing graphs. In this thesis, the use of crossings and curves in graph drawing under additional constraints is investigated in particular. The constraints occurring in the problems investigated in this thesis mainly restrict the positions of (a part of) the vertices, either as a hard constraint or as an optimization criterion.
Routing is one of the most important issues in any communication network. It defines the path on which packets are transmitted from the source of a connection to the destination. It makes it possible to control the distribution of flows between different locations in the network and is thereby a means to influence the load distribution or to meet certain constraints imposed by particular applications. As failures in communication networks occur regularly and cannot be completely avoided, routing is required to be resilient against such outages, i.e., routing still has to be able to forward packets on backup paths even if primary paths are no longer working.
Throughout the years, various routing technologies have been introduced that differ greatly in their control structure, their way of working, and their ability to handle certain failure cases. Each routing approach opens up its own specific questions regarding configuration, optimization, and the inclusion of resilience issues. This monograph investigates, using the example of three particular routing technologies, some concrete issues regarding the analysis and optimization of resilience. It thereby contributes to a better general, technology-independent understanding of these approaches and of their diverse potential for use in future network architectures.
The first routing type considered is decentralized intra-domain routing based on administrative IP link costs and the shortest-path principle. Typical examples are today's common intra-domain routing protocols OSPF and IS-IS. This type of routing includes automatic restoration capabilities in case of failures, which makes it in general very robust, even in the case of severe network outages involving several failed components. Furthermore, special IP Fast Reroute mechanisms allow for a faster reaction to outages. For routing based on link costs, traffic engineering, e.g., the optimization of the maximum relative link load in the network, can be done indirectly by changing the administrative link costs to adequate values.
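The shortest-path principle underlying this routing type is classically computed with Dijkstra's algorithm over the administrative link costs; the following sketch (topology and cost values invented) illustrates how changing a single cost value would indirectly re-route traffic:

```python
import heapq

# Shortest-path routing over administrative link costs, as done by OSPF/IS-IS.
costs = {                       # directed links: node -> {neighbour: cost}
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}

def shortest_paths(source: str) -> dict[str, float]:
    """Dijkstra's algorithm: the basis of link-cost traffic engineering,
    where raising or lowering one cost value indirectly moves traffic."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue            # stale queue entry
        for v, w in costs[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return dist

print(shortest_paths("A"))      # {'A': 0.0, 'B': 1.0, 'C': 2.0, 'D': 3.0}
```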
The second routing type considered, MPLS-based routing, is based on the a priori configuration of primary and backup paths, so-called Label Switched Paths. The routing layout of MPLS paths offers more freedom compared to IP-based routing, as it is not restricted by shortest-path constraints; arbitrary paths can be set up. However, this generally involves a higher configuration effort.
Finally, in the third routing type considered, typically centralized routing using a Software Defined Networking (SDN) architecture, simple switches only forward packets according to routing decisions made by centralized controller units. SDN-based routing layouts offer the same freedom as explicit paths configured using MPLS. In case of a failure, new rules can be set up by the controllers to continue routing in the reduced topology. However, the centralized architecture raises new resilience issues: if the controllers are no longer reachable, the forwarding rules in the individual nodes can no longer be adapted, which may render rerouting infeasible in severe failure scenarios.
With the introduction of OpenFlow by Stanford University in 2008, a process began in the area of network research that questions the predominant approach of fully distributed network control. OpenFlow is a communication protocol that allows the network control plane to be externalized from the network devices, such as routers, and realized as a logically centralized entity in software. For this concept, the term "Software Defined Networking" (SDN) was coined during the scientific discourse.
For network operators, this concept has several advantages. The two most important can be summarized under the headings cost savings and flexibility. Firstly, the uniform interface to network hardware ("Southbound API"), as implemented by OpenFlow, makes it possible to combine devices and software from different manufacturers, which increases the innovation and price pressure on them. Secondly, realizing the network control plane as freely programmable software with open interfaces ("Northbound API") provides the opportunity to adapt it to the individual circumstances of the operator's network and to exchange information with the applications it serves. This allows the network to be more flexible, to react more quickly to changing circumstances, and to transport traffic more effectively, tailored to the user's "Quality of Experience" (QoE).
The approach of a separate network control layer for packet-based networks is not new and has already been proposed several times in the past. Therefore, the SDN approach has raised many questions about its feasibility in terms of efficiency and applicability. These questions are caused to some extent by the fact that there is no generally accepted definition of the SDN concept to date. It is therefore a part of this thesis to derive such a definition. In addition, several of the open issues are investigated. These investigations follow three aspects: the performance evaluation of Software Defined Networking, applications on the SDN control layer, and the usability of the SDN Northbound API for creating application awareness in network operation.
Performance Evaluation of Software Defined Networking: The question of the efficiency of an SDN-based system was one of the most important from the beginning. In this thesis, experimental measurements of the performance of OpenFlow-enabled switch hardware and control software were conducted to answer this question. The results of these measurements served as input parameters for an analytical model of the reactive SDN approach. Through the model it could be determined that the performance of the software control layer, often called the "controller", is crucial for the overall performance of the system, but that the approach is generally viable. Based on this finding, a software tool for analyzing the performance of SDN controllers was developed. This software emulates the forwarding layer of an SDN network towards the control software and can thus determine its performance in different situations and configurations. The measurements with this software showed quite significant differences in the behavior of different control software implementations. Among other things, it was shown that some implementations behave differently with different switches, in particular in terms of message processing speed. Under certain circumstances this can lead to network failures.
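A drastically simplified way to see why controller performance dominates such a reactive system is to treat the controller as a single queue. The following back-of-the-envelope sketch (an M/M/1 assumption with invented rates, not the thesis's actual model) shows how flow-setup latency explodes as load approaches the controller's capacity:

```python
# Toy queueing view of a reactive SDN controller; rates are invented.
def mm1_sojourn_time(service_rate: float, arrival_rate: float) -> float:
    """Mean time a flow-setup request spends at the controller (seconds)."""
    if arrival_rate >= service_rate:
        raise ValueError("controller overloaded: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

controller_rate = 40_000        # flow setups/s the control software handles (assumed)
for new_flows in (10_000, 30_000, 39_000):
    t_ms = mm1_sojourn_time(controller_rate, new_flows) * 1e3
    print(f"{new_flows:>6} new flows/s -> {t_ms:.2f} ms mean setup latency")
# Latency grows without bound near capacity, which is why the controller is
# the crucial component for the overall system performance.
```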
Applications on the SDN control layer: The core of Software Defined Networking is the intelligent network applications that operate on the control layer. However, their development is still in its infancy and little is known about the technical possibilities and their limitations. Therefore, the relationship between an SDN-based and a classical implementation of a network function is investigated in this thesis. This function is the monitoring of network links and the traffic they carry. A typical approach to this task was built based on wiretapping and specialized measurement hardware and compared with an implementation based on OpenFlow switches and a special SDN control application. The results of the comparison show that the SDN version can compete with the traditional measurement set-up in terms of measurement accuracy for bandwidth and delay estimation. However, a compromise has to be found for measurements below the millisecond range.
Another question regarding the SDN control applications is whether and how well they can solve existing problems in networks. In this thesis, two programs were developed based on SDN to solve two typical network issues. The first is the tool "IPOM", which offers considerably more flexibility in studying the effects of network structures to a researcher who is confined to a fixed physical test network topology.
The second software provides an interface between the cloud orchestration software "OpenNebula" and an OpenFlow controller. The purpose of this software was to investigate experimentally whether notifying the network in advance of an impending relocation of a virtual service in a data center is sufficient to ensure the continuous operation of that service. This was demonstrated using the example of a video service.
Usability of the SDN Northbound API for creating application awareness in network operation: Currently, the fact that the network and the applications that run on it are developed and operated separately leads to problems in network operation. With the Northbound API, SDN offers an open interface that enables an exchange of information between both worlds during operation. One aim of this thesis was to investigate whether this interface can be exploited so that the QoE experienced by the user can be maintained at a high level. For this purpose, the QoE influence factors of a challenging application were determined by means of a subjective survey study. The application is cloud gaming, in which the rendering of video game environments takes place in the cloud and is transported to the user as video over the network. It was shown that, apart from the most important QoS influence factor, i.e., packet loss on the downlink, the type of game and its speed also play a role. This demonstrates that, in addition to QoS, the application state is important and should be communicated to the network. Since an implementation of such a state-conscious SDN was not possible for the cloud gaming example due to its proprietary implementation, the application "YouTube video streaming" was chosen as an alternative in this thesis. For this application, status information is retrievable via the "Yomo" tool and can be used for network control. It was shown that an SDN-based implementation of an application-aware network has distinct advantages over traditional network management methods and that the user quality can be maintained in spite of disturbances.
In this paper we study connectivity augmentation problems. Given a connected graph G with some desirable property, we want to make G 2-vertex connected (or 2-edge connected) by adding edges such that the resulting graph keeps the property. The aim is to add as few edges as possible. The property that we consider is planarity, both in an abstract graph-theoretic and in a geometric setting, where vertices correspond to points in the plane and edges to straight-line segments.
We show that it is NP-hard to find a minimum-cardinality augmentation that makes a planar graph 2-edge connected. For making a planar graph 2-vertex connected this was known. We further show that both problems are hard in the geometric setting, even when restricted to trees. The problems remain hard for higher degrees of connectivity. On the other hand we give polynomial-time algorithms for the special case of convex geometric graphs.
We also study the following related problem. Given a planar (plane geometric) graph G, two vertices s and t of G, and an integer c, how many edges have to be added to G such that G is still planar (plane geometric) and contains c edge- (or vertex-) disjoint s–t paths? For the planar case we give a linear-time algorithm for c = 2. For the plane geometric case we give optimal worst-case bounds for c = 2; for c = 3 we characterize the cases that have a solution.
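For readers unfamiliar with disjoint paths: by Menger's theorem, the maximum number of edge-disjoint s–t paths equals a maximum flow with unit capacities. The sketch below (a plain Edmonds-Karp computation on an invented example graph; it deliberately ignores the planarity maintenance that is the actual subject above) computes this number:

```python
from collections import deque

# Counting edge-disjoint s-t paths via unit-capacity max flow (Menger).
def edge_disjoint_paths(edges: list[tuple[str, str]], s: str, t: str) -> int:
    cap = {}                                     # residual capacities
    for u, v in edges:                           # undirected input graph
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap[(v, u)] = cap.get((v, u), 0) + 1
    flow = 0
    while True:                                  # Edmonds-Karp: BFS augmenting paths
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    queue.append(b)
        if t not in parent:
            return flow
        v = t                                    # augment one unit along the path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

square = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")]
print(edge_disjoint_paths(square, "s", "t"))     # 2
```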
The present paper compares the effect of different waypoint parameters on the flight performance of a special autonomous indoor UAV (unmanned aerial vehicle) fusing ultrasonic, inertial, pressure, and optical sensors for 3D positioning and control. The investigated parameters are the acceptance threshold for reaching a waypoint as well as the maximum waypoint step size or block size. The effect of these parameters on the flight time and the accuracy of the flight path is investigated. The paper thus addresses how the acceptance threshold and step size influence the speed and accuracy of the autonomous flight and thereby the performance of the presented autonomous quadrocopter under real indoor navigation conditions.
Furthermore, the paper demonstrates a drawback of the standard potential field method for the navigation of such autonomous quadrocopters and points to an improvement.
A procedure to control all six DOF (degrees of freedom) of a UAV (unmanned aerial vehicle) without an external reference system, enabling fully autonomous flight, is presented here. For 2D positioning, the principle of optical flow is used. Together with the output of the height estimation, which fuses ultrasonic, infrared, inertial, and pressure sensor data, the 3D position of the UAV can be computed, controlled, and steered. All data processing is done on the UAV; an external computer with a path-planning interface is used for commanding purposes only. The presented system is part of the AQopterI8 project, which aims to develop an autonomous flying quadrocopter for indoor applications. The focus of this paper is 2D positioning using an optical flow sensor. As a result of the performed evaluation, it can be concluded that for position hold the standard deviation of the position error is 10 cm, and after landing the position error is about 30 cm.
Streaming of videos has become the major traffic generator in today's Internet, and the video traffic share is still increasing. According to Cisco's annual Visual Networking Index report, in 2012, 60% of the global Internet IP traffic was generated by video streaming services, and the study predicts a further increase to 73% by 2017. At the same time, advances in the fields of mobile communications and embedded devices have led to a widespread adoption of Internet-video-enabled mobile and wireless devices (e.g. smartphones). The report predicts that by 2017 the traffic originating from mobile and wireless devices will exceed the traffic from wired devices, and states that mobile video traffic was the source of roughly half of the mobile IP traffic at the end of 2012.
With the increasing importance of Internet video streaming in today's world, video content providers find themselves in a highly competitive market where user expectations are high and customer loyalty depends strongly on the user's satisfaction with the provided service. In particular, paying customers expect their viewing experience to be the same across all their viewing devices and independent of their currently used Internet access technology. However, providing video streaming services is costly in terms of storage space, required bandwidth, and generated traffic. Therefore, content providers face a trade-off between the user-perceived Quality of Experience (QoE) and the costs of providing the service.
Today, a variety of transport and application protocols exist for providing video streaming services, but which one is utilized depends on the scenario in mind. Video streaming services can be divided into three categories: video conferencing, IPTV, and Video-on-Demand services. IPTV and video conferencing have severe real-time constraints and thus mostly utilize datagram-based protocols like RTP/UDP for the video transmission. Video-on-Demand services, in contrast, can profit from pre-encoded content and buffers at the end user's device, and mostly utilize TCP-based protocols in combination with progressive streaming for media delivery.
In recent years, the HTTP protocol on top of the TCP protocol gained widespread popularity as a cost-efficient way to distribute pre-encoded video content to customers via progressive streaming. This is due to the fact that HTTP-based video streaming profits from a well-established infrastructure which was originally deployed to efficiently satisfy the increasing demand for web browsing and file downloads. Large Content Delivery Networks (CDNs) are the key components of that distribution infrastructure. CDNs prevent expensive long-haul data traffic and delays by distributing HTTP content to locations world-wide, close to the customers. As of 2012, 53% of the global video traffic in the Internet already originates from Content Delivery Networks, and that percentage is expected to increase to 65% by the year 2017. Furthermore, HTTP media streaming profits from the existing HTTP caching infrastructure, the ease of NAT and proxy traversal, and firewall friendliness.
Video delivery through heterogeneous wired and wireless communications networks is prone to distortions due to insufficient network resources. This is especially true in wireless scenarios, where user mobility and insufficient signal strength can result in a very poor transport service performance (e.g. high packet loss, delays and low and varying bandwidth). A poor performance of the transport in turn may degrade the Quality of Experience as perceived by the user, either due to buffer underruns (i.e. playback interruptions) for TCP-based delivery or image distortions for datagram-based real-time video delivery.
In order to overcome QoE degradations due to insufficient network resources, content providers have to consider adaptive video streaming. One way to implement this for HTTP/TCP streaming is to partition the content into small segments, encode the segments into different quality levels, and provide access to the segments along with the quality level details (e.g. resolution, average bitrate). During the streaming session, a client-centric adaptation algorithm can use the supplied details to adapt the playback to the current environment. However, the lack of a common HTTP adaptive streaming standard led to multiple proprietary solutions, loosely based on the aforementioned principle, developed by major Internet companies like Microsoft (Smooth Streaming), Apple (HTTP Live Streaming), and Adobe (HTTP Dynamic Streaming). In 2012, the ISO/IEC published the Dynamic Adaptive Streaming over HTTP (MPEG-DASH) standard. As of today, DASH is becoming widely accepted, with major companies announcing their support or having already implemented the standard in their products. MPEG-DASH is typically used with single-layer codecs like H.264/AVC, but recent publications show that scalable video coding can use the existing HTTP infrastructure more efficiently. Furthermore, the layered approach of scalable video coding extends the adaptation options for the client, since already downloaded segments can be enhanced at a later time.
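To illustrate what such a client-centric adaptation step might look like (a minimal sketch only: thresholds, the safety margin, and the bitrate ladder are invented, and this is not the algorithm proposed in this thesis):

```python
# Minimal sketch of a client-side DASH adaptation decision per segment.
BITRATES_KBPS = [400, 1000, 2500, 5000]     # quality levels from the MPD (assumed)

def choose_quality(throughput_kbps: float, buffer_s: float) -> int:
    """Pick the next segment's quality level from the measured throughput
    and the current playback buffer fill."""
    if buffer_s < 5.0:                       # near underrun: protect playback
        return 0
    safe = 0.8 * throughput_kbps             # safety margin on the estimate
    level = max(i for i, b in enumerate(BITRATES_KBPS) if b <= safe or i == 0)
    if buffer_s < 15.0:                      # low buffer: be conservative
        level = max(0, level - 1)
    return level

print(choose_quality(4000, 30))  # ample buffer, 4 Mbit/s -> level 2 (2500 kbps)
print(choose_quality(4000, 10))  # same throughput, low buffer -> level 1
print(choose_quality(4000, 3))   # imminent stall -> lowest level 0
```

With SVC, the same decision logic could additionally choose to fetch enhancement layers for segments that are already buffered, which is the extended adaptation option mentioned above.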
The influence of distortions on the perceived QoE for non-adaptive video streaming is well reviewed and published. For HTTP streaming, the QoE of the user is influenced by the initial delay (i.e. the time the client pre-buffers video data) and the length and frequency of playback interruptions due to a depleted video playback buffer. Studies highlight that even low stalling times and frequencies have a negative impact on the QoE of the user and should therefore be avoided. The first contribution of this thesis is the identification of QoE influence factors of adaptive video streaming by means of a crowd-sourcing and a laboratory study.
MPEG-DASH does not specify how to adapt the playback to the available bandwidth, and therefore the design of a download/adaptation algorithm is left to the developer of the client logic. The second contribution of this thesis is the design of a novel user-centric adaptation logic for DASH with SVC. Other download algorithms for segmented HTTP streaming with single-layer and scalable video coding have been published lately, but there is little information about the behavior of these algorithms regarding the identified QoE influence factors. The third contribution is a user-centric performance evaluation of three existing adaptation algorithms and a comparison to the proposed algorithm. In the performance evaluation we also evaluate the fairness of the algorithms. In one fairness scenario, two clients deploy the same adaptation algorithm and share one Internet connection. For a fair adaptation algorithm, we expect the behavior of the two clients to be identical. In a second fairness scenario, one client shares the Internet connection with a large HTTP file download, and we expect an even bandwidth distribution between the video streaming and the file download. The fourth contribution of this thesis is an evaluation of the behavior of the algorithms in a two-client and an HTTP cross-traffic scenario.
The remainder of this thesis is structured as follows. Chapter II gives a brief introduction to video coding with H.264, the HTTP adaptive streaming standard MPEG-DASH, the investigated adaptation algorithms, and metrics of Quality of Experience (QoE) for video streaming. Chapter III presents the methodology and results of the subjective studies conducted in the course of this thesis to identify the QoE influence factors of adaptive video streaming. In Chapter IV, we introduce the proposed adaptation algorithm and the methodology of the performance evaluation. Chapter V highlights the results of the performance evaluation and compares the investigated adaptation algorithms. Chapter VI summarizes the main findings and gives an outlook towards QoE-centric management of DASH with SVC.
Today's Internet architecture was not designed from scratch but was driven by new services that emerged during its development. Hence, it is often described as a patchwork where additional patches are applied whenever new services require modifications to the existing architecture. This process, however, is rather slow and hinders the development of innovative network services with particular architecture or network requirements. Currently discussed technologies like Software-Defined Networking (SDN) and Network Virtualization (NV) are seen as key enabling technologies to overcome this rigid best-effort legacy of the Internet. Both technologies offer the possibility to create virtual networks that accommodate the specific needs of certain services. These logical networks are operated on top of a physical substrate and facilitate flexible network resource allocation, as physical resources can be added and removed depending on the current network and load situation. In addition, the clear separation and isolation of networks foster the development of application-aware networks that fulfill the special requirements of emerging applications. A prominent use case that benefits from these extended capabilities of the network is service component mobility: services hosted on Virtual Machines (VMs) follow their consuming mobile endpoints, so that access latency as well as consumed network resources are reduced. Especially for applications like video streaming, which consume a large fraction of the available resources, this is an important means to relieve resource constraints and eventually provide better service quality. Service and endpoint mobility both allow an adaptation of the paths between an offered service, e.g., video streaming, and the consuming users in case the service quality drops due to network problems. To make evidence-based adaptations in case of quality drops, a scalable monitoring component is required that is able to monitor the service quality of video streaming applications with reliable accuracy. This monograph details the challenges that arise when deploying a certain service, i.e., video streaming, in a future virtualized network architecture and discusses possible solutions. In particular, this work evaluates the performance of mechanisms enabling service mobility and presents an optimized architecture for service mobility. Concerning endpoint mobility, improvements are developed that reduce the latency between endpoints and consumed services and ensure connectivity regardless of the mobile access network used. In the last part, a network-based video quality monitoring solution is developed and its accuracy is evaluated.
Radiation therapy today, on account of improvements in treatment procedures over the last 60 years, allows precise treatment of static tumors inside the human body. However, irradiation of moving tumors is still a challenging task, as moving tumors often leave the treatment beam: the radiation dose delivered to the tumor is reduced while that delivered to healthy tissue increases. This research work aims to push the frontiers of radiation therapy in order to enable precise treatment of moving tumors, with focus on the research and development of a unique real-time system enabling active motion compensation through robotic means. During treatment, patients lie on a treatment couch which is normally used for static corrections of patient set-up errors prior to radiation treatment. The treatment couch used, called HexaPOD, is a parallel manipulator with six degrees of freedom which can precisely position heavy loads inside a small region. Although the HexaPOD was not initially built with dynamics in mind, it is used in this work for sustained motion compensation by moving patients such that tumors stay precisely located at the center of the treatment beam during the complete course of treatment. In order to realize real-time tumor motion compensation by means of the HexaPOD, several challenges need to be addressed. Real-time aspects are covered by the adoption of a hard real-time operating system in combination with measurement and estimation of the latencies of all physical quantities in the compensation system, such as tumor or breathing position measurements. Accurate timing information is respected consistently in the whole system and all software-induced latencies are adaptively compensated for. This requires knowledge of future tumor positions from predictors. Several predictors for breathing and tumor motion prediction are proposed and evaluated in terms of a variety of different performance metrics. Extensions to the prediction algorithms are introduced that fuse breathing and tumor position information to allow for predictions without the need of an explicit correlation model. Predictions determine the future motion path of the HexaPOD in order to compensate for tumor motion. Several control schemes are developed to enable reference tracking for the HexaPOD. Based on linear and non-linear dynamic modelling of the HexaPOD with system identification methods, a first controller is derived in the form of a model predictive controller. A second controller is proposed based on an assumption about the working principle of the HexaPOD's internal controller. Finally, a third controller is derived as a combination of the first and second one. For each of these controllers, comparative results from real hardware experiments with humans in the loop, as well as choices of free parameters, are presented and discussed. Apart from precise tracking, emphasis is placed on patient comfort, which is of crucial importance for acceptance of the system. It is demonstrated that smooth trajectories can be realized by the controllers to guarantee that patients feel comfortable while their tumor motion is compensated at sub-millimeter accuracies. Overall errors of the system are analyzed by relating them to tracking and prediction errors. By exploiting the properties of different predictors, it is shown that the startup time until tracking is reached can be reduced to only a few seconds, even in the case of an initially at-rest HexaPOD and with no initial knowledge of tumor motion.
This makes the system especially suitable for the relatively short, fractionated treatment sessions for lung tumors. The tumor motion compensation system has been developed solely based on standard clinical hardware, as found in most treatment rooms. With a simple and flexible design, existing treatment setups can be upgraded in a cost-efficient way to introduce motion compensation capabilities. At the same time, the system does not impose any constraints on state-of-the-art treatment types such as intensity-modulated radiotherapy or volumetric modulated arc therapy. Supporting different compensation modes, the system can be applied to any moving tumor, whether its motion is predictable (lung tumors) or unpredictable (prostate tumors). By integrating adequate tumor position determination methods, the system can easily be extended to other tumors as well.
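As a toy illustration of the latency-compensating prediction idea described above (a plain least-squares autoregressive extrapolation on a synthetic breathing-like signal; the model order, sampling rate, and horizon are assumed, and the thesis's predictors are considerably more sophisticated):

```python
import numpy as np

# Fit an autoregressive model to a recent breathing trace and extrapolate
# one system latency ahead, so the couch command can lead the measurement.
rng = np.random.default_rng(1)
t = np.arange(0, 30, 0.05)                         # 20 Hz samples (assumed)
signal = np.sin(2 * np.pi * t / 4.0) + 0.02 * rng.standard_normal(t.size)

order, horizon = 10, 4                             # AR order; 4 samples = 200 ms
X = np.array([signal[i:i + order]
              for i in range(signal.size - order - horizon + 1)])
y = signal[order + horizon - 1:]                   # targets `horizon` steps ahead
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares AR coefficients

pred = signal[-order:] @ coeffs                    # predict 200 ms into the future
print(f"predicted position: {pred:+.3f}")
```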
Cover contact graphs
(2012)
We study problems that arise in the context of covering certain geometric objects called seeds (e.g., points or disks) by a set of other geometric objects called a cover (e.g., a set of disks or homothetic triangles). We insist that the interiors of the seeds and the cover elements are pairwise disjoint, but they can touch. We call the contact graph of a cover a cover contact graph (CCG). We are interested in three types of tasks, both in the general case and in the special case of seeds on a line: (a) deciding whether a given seed set has a connected CCG, (b) deciding whether a given graph has a realization as a CCG on a given seed set, and (c) bounding the sizes of certain classes of CCGs. Concerning (a) we give efficient algorithms for the case that seeds are points and show that the problem becomes hard if seeds and covers are disks. Concerning (b) we show that this problem is hard even for point seeds and disk covers (given a fixed correspondence between graph vertices and seeds). Concerning (c) we obtain upper and lower bounds on the number of CCGs for point seeds.
This work takes a close look at several quite different research areas related to the design of networked embedded sensor/actuator systems. The variety of the topics illustrates the potential complexity of current sensor network applications, especially when enriched with actuators for proactivity and environmental interaction. Besides their conception, development, installation, and long-term operation, we mainly focus on more "low-level" aspects: compositional hardware and software design, task cooperation and collaboration, memory management, and real-time operation are addressed from a local node perspective. In contrast, inter-node synchronization and communication, as well as sensor data acquisition, aggregation, and fusion, are discussed from a rather global network view. The diversity of the concepts was intentionally accepted in order to facilitate the reliable implementation of truly complex systems. In particular, these should go beyond the usual "sense and transmit" of sensor data and show how powerful today's networked sensor/actuator systems can be despite their low computational performance and constrained hardware, provided that their resources are coordinated efficiently.
The work presents a performance evaluation and optimization of so-called overlay networks for content distribution in the Internet. Chapter 1 describes the importance that such networks have in today's Internet, for example for the transmission of video content. The focus of this work is on overlay networks based on the peer-to-peer principle. These are characterized by the fact that users who download content also contribute to the distribution process by sharing parts of the data with other users. This enables efficient content distribution because each user not only consumes resources in the system but also provides its own resources. Chapter 2 of the monograph contains a detailed description of the functionality of today's most popular overlay network, BitTorrent. It explains the various components and their interaction. This is followed by an illustration of why such overlay networks are problematic for Internet service providers (ISPs). The reason lies in the large amount of inter-ISP traffic that these overlay networks produce. Since this inter-ISP traffic leads to high costs for ISPs, they try to reduce it through improved mechanisms for overlay networks. One optimization approach is the use of topology awareness within the overlay networks. It provides users of the overlay networks with information about the underlying physical network topology. This allows them to avoid inter-ISP traffic by preferentially exchanging data with other users connected to the same ISP. Another approach to saving inter-ISP traffic is caching. In this case the ISP provides additional computers in its network, called caches, which store copies of popular content. The users of this ISP can then obtain such content from the cache. This avoids retrieving the content from locations outside the ISP's network and thus saves costly inter-ISP traffic. In the third chapter of the thesis, the results of a comprehensive measurement study of overlay networks in today's Internet are presented. After a short description of the measurement methodology, the results of the measurements are described. These results contain data on a variety of characteristics of current P2P overlay networks in the Internet, including the popularity of content, i.e., how many users are interested in specific content, the evolution of this popularity, and the size of the files. The distribution of users within the Internet is investigated in detail. Special attention is given to the number of users that exchange a particular file within the same ISP. On the basis of these measurement results, an estimate of the traffic savings that can be achieved by topology awareness is derived. This new estimate is of scientific and practical importance, since it is not limited to individual ISPs and files but considers the whole Internet and the total amount of data exchanged in overlay networks. Finally, the characteristics of regional content are considered, whose popularity is limited to certain parts of the Internet. This is, for example, the case for videos in German, Italian, or French. Chapter 4 of the thesis is devoted to the optimization of overlay networks for content distribution through caching. It presents a deterministic flow model that describes the influence of caches. On the basis of this model, it derives an estimate of the inter-ISP traffic that is generated by an overlay network and of the part that can be saved by caches.
The results show that the influence of a cache depends on the structure of the overlay networks, and that caches can under certain circumstances even lead to an increase in inter-ISP traffic. The described model is thus an important tool for ISPs when deciding for which overlay networks caches are useful and how to dimension them. Chapter 5 summarizes the content of the work and emphasizes the importance of the findings. In addition, it explains how the findings can be applied to the optimization of future overlay networks. Special attention is given to the growing importance of video-on-demand and real-time video transmissions.
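A deliberately simplified, invented variant of such a flow balance (ignoring the peer-exchange dynamics that can make caching counterproductive, which the monograph's model does capture) can illustrate how cache capacity translates into inter-ISP traffic savings:

```python
# Toy deterministic-flow estimate of inter-ISP traffic with a cache.
# All rates are invented and serve only to illustrate the balance.
def inter_isp_traffic(demand_mbps: float, local_peer_supply_mbps: float,
                      cache_mbps: float) -> float:
    """Traffic that must still be fetched from outside the ISP: demand not
    covered by local peers or the cache."""
    return max(0.0, demand_mbps - local_peer_supply_mbps - cache_mbps)

demand, local = 1000.0, 300.0
for cache in (0.0, 200.0, 500.0):
    external = inter_isp_traffic(demand, local, cache)
    saved = inter_isp_traffic(demand, local, 0.0) - external
    print(f"cache {cache:>5.0f} Mbit/s -> external {external:>5.0f} Mbit/s "
          f"(saves {saved:.0f})")
```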