000 Computer Science, Information Science, General Works
In recent years, cloud gaming has become a popular research topic, and many benefits over conventional gaming have been claimed for it in the commercial domain. While cloud gaming platforms have frequently failed in the past, they have received new impetus over the last years that has brought them to the edge of a commercial breakthrough. The fragility of the cloud gaming market may be caused by high investment costs, the offered pricing models, or competition from existing "à la carte" platforms. This paper investigates the costs and benefits of both platform types through a twofold approach. We first take the perspective of the customers and investigate several cloud gaming platforms and their pricing models in comparison to the costs of other gaming platforms. Then, we explore engagement metrics in order to assess the enjoyment of playing the offered games. Lastly, from the perspective of the service providers, we aim to identify challenges in cost-effectively operating a large-scale cloud gaming service while maintaining high QoE values. Our analysis provides initial, yet comprehensive, reasons and models for the prospects of cloud gaming in a highly competitive market.
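The customer-side cost comparison reduces to a break-even computation between a recurring subscription and a one-time hardware purchase plus per-title game prices. A minimal sketch of this comparison (all prices below are illustrative placeholders, not figures from the paper):

```python
# Break-even point between a cloud gaming subscription and a
# conventional "a la carte" platform (console + purchased titles).
# All numbers below are illustrative assumptions, not measured data.

SUBSCRIPTION_PER_MONTH = 30.0   # cloud gaming fee (assumed)
CONSOLE_PRICE = 400.0           # one-time hardware cost (assumed)
PRICE_PER_GAME = 50.0           # average title price (assumed)
GAMES_PER_YEAR = 6              # purchasing behaviour (assumed)

def cumulative_cost_cloud(months: int) -> float:
    return SUBSCRIPTION_PER_MONTH * months

def cumulative_cost_console(months: int) -> float:
    return CONSOLE_PRICE + PRICE_PER_GAME * GAMES_PER_YEAR * (months / 12)

# Find the month in which the subscription becomes more expensive.
month = 1
while cumulative_cost_cloud(month) <= cumulative_cost_console(month):
    month += 1
print(f"Cloud gaming costs more from month {month} on.")
```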
Purpose: A study of real-time adaptive radiotherapy systems was performed to test the hypothesis that, across delivery systems and institutions, dosimetric accuracy is improved with adaptive treatments over non-adaptive radiotherapy in the presence of patient-measured tumor motion. Methods and materials: Ten institutions with robotic (2), gimbaled (2), MLC (4), or couch tracking (2) used common materials, including CT and structure sets, motion traces, and planning protocols, to create a lung and a prostate plan. For each motion trace, the plan was delivered twice to a moving dosimeter, with and without real-time adaptation. Each measurement was compared to a static measurement, and the percentage of failed points for gamma-tests was recorded. Results: For all lung traces, all measurement sets show improved dose accuracy, with a mean 2%/2 mm gamma-fail rate of 1.6% with adaptation and 15.2% without adaptation (p < 0.001). For all prostate traces, the mean 2%/2 mm gamma-fail rate was 1.4% with adaptation and 17.3% without adaptation (p < 0.001). The difference between the four system types was small, with an average 2%/2 mm gamma-fail rate of <3% for all systems with adaptation for lung and prostate. Conclusions: The investigated systems all accounted for realistic tumor motion accurately and performed to a similarly high standard, with real-time adaptation significantly outperforming non-adaptive delivery methods.
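The 2%/2 mm gamma test combines a dose-difference tolerance (2% of the reference dose) with a distance-to-agreement tolerance (2 mm); a point fails if no nearby reference point lies within the combined criterion. A minimal 1-D sketch of the gamma index in the Low et al. formulation (simplified to one spatial dimension; actual QA implementations work on 2-D or 3-D dose grids):

```python
import numpy as np

def gamma_index_1d(ref_dose, eval_dose, positions,
                   dose_tol=0.02, dist_tol_mm=2.0):
    """1-D gamma index per evaluated point.

    ref_dose, eval_dose: dose values on the same 1-D grid
    positions: grid positions in mm
    dose_tol: dose-difference criterion as fraction of max reference dose
    dist_tol_mm: distance-to-agreement criterion in mm
    """
    d_tol = dose_tol * np.max(ref_dose)
    gammas = np.empty(len(eval_dose))
    for i, (de, xe) in enumerate(zip(eval_dose, positions)):
        # Generalized distance to every reference point, normalized by
        # the two tolerances; gamma is the minimum over all references.
        dist = (positions - xe) / dist_tol_mm
        diff = (ref_dose - de) / d_tol
        gammas[i] = np.min(np.sqrt(dist**2 + diff**2))
    return gammas

# Fail rate as reported above: fraction of points with gamma > 1, e.g.
# fail_rate = np.mean(gamma_index_1d(ref, measured, x) > 1.0) * 100
```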
As part of the BMBF-funded project KALLIMACHOS at the University of Würzburg, the textual basis for digital editions is to be obtained via OCR, among other goals. The corpus to be processed consists of German, French, and Latin incunabula. This article shows how the problems of applying OCR to incunabula can be addressed with methods and programs that already exist today. To this end, a procedure was tested at the Würzburg University Library with which character accuracies of up to 95 percent and word accuracies of up to 73 percent have already been achieved on selected works from a single printing workshop.
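Character and word accuracy in OCR evaluation are commonly derived from the edit (Levenshtein) distance between the recognized text and a ground-truth transcription. A minimal sketch of that computation (the 1 − distance/length convention used here is the common definition, assumed rather than taken from the article):

```python
def levenshtein(a, b):
    """Edit distance between two sequences (characters or words)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def accuracy(recognized, truth):
    return 1.0 - levenshtein(recognized, truth) / max(len(truth), 1)

ocr, gt = "Jnkunabel", "Inkunabel"   # typical long-s / letter confusion
print(f"character accuracy: {accuracy(ocr, gt):.2%}")
print(f"word accuracy: {accuracy(ocr.split(), gt.split()):.2%}")
```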
In this thesis, concurrency, consistency, and latency in asynchronous interactive real-time systems are investigated using the techniques of profiling and model checking. First, it is explained why the asynchronous model is the most promising one for concurrency in an interactive real-time system; for this purpose, a comparison with other models is drawn. In addition, a detailed comparison of synchronization technologies, which provide the foundation for consistency, is carried out. Based on these two comparisons and on an examination of other systems, a synchronization concept is developed.
On this basis, concurrency, consistency, and latency are examined with two techniques. The first technique is profiling, in the course of which several new forms of visualizing measured data are developed. These newly developed visualizations are used in the implementation of a profiler. As the second technique, model checking is analyzed, which has not previously been used in the context of interactive real-time systems. Model checking serves to predict the behavior of an interactive real-time system, and these predictions are compared with the measurements from the profiler.
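Comparing model-checking predictions against profiler measurements requires, at minimum, per-iteration latency measurements from the running system. A minimal sketch of such a latency profiler for an asynchronous loop (the predicted bound and the loop body are placeholder assumptions, not values or components from the thesis):

```python
import time
import statistics

PREDICTED_MAX_LATENCY_MS = 16.7  # assumed bound, e.g. from a model checker

def profile_loop(step, frames=1000):
    """Measure per-iteration latency of an asynchronous system step."""
    latencies = []
    for _ in range(frames):
        t0 = time.perf_counter()
        step()                      # one iteration of the async component
        latencies.append((time.perf_counter() - t0) * 1000.0)
    return latencies

def compare_to_prediction(latencies):
    violations = sum(l > PREDICTED_MAX_LATENCY_MS for l in latencies)
    print(f"mean {statistics.mean(latencies):.2f} ms, "
          f"max {max(latencies):.2f} ms, "
          f"{violations} of {len(latencies)} frames exceed the predicted bound")

compare_to_prediction(profile_loop(lambda: time.sleep(0.005)))
```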
Virtualization allows the creation of virtual instances of physical devices, such as network and processing units. In a virtualized system, governed by a hypervisor, resources are shared among virtual machines (VMs). Virtualization has been receiving increasing interest as a way to reduce costs through server consolidation and to enhance the flexibility of physical infrastructures. Although virtualization provides many benefits, it introduces new security challenges: the introduction of a hypervisor creates new threats, since hypervisors expose new attack surfaces.
Intrusion detection is a common cyber security mechanism whose task is to detect malicious activities in host and/or network environments. This enables timely reaction in order to stop an on-going attack, or to mitigate the impact of a security breach. The wide adoption of virtualization has resulted in the increasingly common practice of deploying conventional intrusion detection systems (IDSs), for example, hardware IDS appliances or common software-based IDSs, in designated VMs as virtual network functions (VNFs). In addition, the research and industrial communities have developed IDSs specifically designed to operate in virtualized environments (i.e., hypervisor-based IDSs), with components both inside the hypervisor and in a designated VM. The latter are becoming increasingly common with the growing proliferation of virtualized data centers and the adoption of the cloud computing paradigm, for which virtualization is a key enabling technology.
To minimize the risk of security breaches, methods and techniques for evaluating IDSs in an accurate manner are essential. For instance, one may compare different IDSs in terms of their attack detection accuracy in order to identify and deploy the IDS that operates optimally in a given environment, thereby reducing the risk of a security breach. However, methods and techniques for the realistic and accurate evaluation of the attack detection accuracy of IDSs in virtualized environments (i.e., IDSs deployed as VNFs or hypervisor-based IDSs) are lacking. That is, workloads that exercise the sensors of an evaluated IDS and contain attacks targeting hypervisors are needed. Attacks targeting hypervisors are of high severity since they may result in, for example, altering the hypervisor's memory and thus enabling the execution of malicious code with hypervisor privileges. In addition, there are no metrics and measurement methodologies for accurately quantifying the attack detection accuracy of IDSs in virtualized environments with elastic resource provisioning (i.e., on-demand allocation or deallocation of virtualized hardware resources to VMs). Modern hypervisors allow for hotplugging virtual CPUs and memory on the designated VM where the intrusion detection engine of hypervisor-based IDSs, as well as of IDSs deployed as VNFs, typically operates. Resource hotplugging may have a significant impact on the attack detection accuracy of an evaluated IDS, which is not taken into account by existing metrics for quantifying IDS attack detection accuracy. This may lead to inaccurate measurements, which, in turn, may result in the deployment of misconfigured or ill-performing IDSs, increasing the risk of security breaches.
This thesis presents contributions that span the standard components of any system evaluation scenario: workloads, metrics, and measurement methodologies. The scientific contributions of this thesis are:
A comprehensive systematization of the common practices and the state-of-the-art on IDS evaluation. This includes: (i) a definition of an IDS evaluation design space that allows existing practical and theoretical work to be put into a common context in a systematic manner; (ii) an overview of common practices in IDS evaluation, reviewing evaluation approaches and methods related to each part of the design space; and (iii) a set of case studies demonstrating how different IDS evaluation approaches are applied in practice. Given the significant amount of existing practical and theoretical work related to IDS evaluation, the presented systematization is beneficial for improving the general understanding of the topic by providing an overview of the current state of the field. In addition, it is beneficial for identifying and contrasting the advantages and disadvantages of different IDS evaluation methods and practices, while also helping to identify specific requirements and best practices for evaluating current and future IDSs.
An in-depth analysis of common vulnerabilities of modern hypervisors, as well as a set of attack models capturing the activities of attackers triggering these vulnerabilities. The analysis includes 35 representative vulnerabilities of hypercall handlers (i.e., hypercall vulnerabilities). Hypercalls are software traps from the kernel of a VM to the hypervisor. The hypercall interface of hypervisors, alongside device drivers and VM exit events, is one of the attack surfaces that hypervisors expose. Triggering a hypercall vulnerability may lead to a crash of the hypervisor or to altering the hypervisor's memory. We analyze the origins of the considered hypercall vulnerabilities, demonstrate and analyze possible attacks that trigger them (i.e., hypercall attacks), develop hypercall attack models (i.e., systematized activities of attackers targeting the hypercall interface), and discuss future research directions focusing on approaches for securing hypercall interfaces.
A novel approach for evaluating IDSs that enables the generation of workloads containing attacks targeting hypervisors, that is, hypercall attacks. We propose an approach for evaluating IDSs using attack injection (i.e., the controlled execution of attacks during regular operation of the environment where an IDS under test is deployed). The injection of attacks is performed based on attack models that capture realistic attack scenarios. We use the hypercall attack models developed as part of this thesis for injecting hypercall attacks.
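A hedged sketch of the injection idea: an attack model can be represented as a sequence of hypercall invocations with crafted parameters, and an injector replays it during regular operation while logging ground-truth timestamps for later comparison with IDS alerts. All names below (HypercallStep, inject, the stub backend) are hypothetical illustrations, not the actual hInjector interface:

```python
import time
from dataclasses import dataclass, field

@dataclass
class HypercallStep:
    """One step of an attack model: a hypercall with crafted parameters."""
    name: str                 # e.g. "memory_op" (illustrative)
    args: tuple = ()

@dataclass
class AttackModel:
    steps: list = field(default_factory=list)

def inject(model, execute_hypercall, log):
    """Replay an attack model during regular operation, recording
    ground truth (what was injected and when) for later comparison
    with the alerts raised by the IDS under test."""
    for step in model.steps:
        log.append((time.time(), step.name, step.args))
        execute_hypercall(step.name, *step.args)  # platform-specific backend

# Hypothetical usage: a one-step attack model issuing a hypercall
# with an out-of-range argument; the backend here is a stub.
ground_truth = []
model = AttackModel([HypercallStep("memory_op", (0xFFFF,))])
inject(model, lambda name, *a: None, ground_truth)
```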
A novel metric and measurement methodology for quantifying the attack detection accuracy of IDSs in virtualized environments that feature elastic resource provisioning. We demonstrate how the elasticity of resource allocations in such environments may impact the IDS attack detection accuracy and show that using existing metrics in such environments may lead to practically challenging and inaccurate measurements. We also demonstrate the practical use of the metric we propose through a set of case studies, where we evaluate common conventional IDSs deployed as VNFs.
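One way to make the elasticity issue concrete: if detection accuracy is computed once over the whole evaluation run, phases with different resource allocations are averaged away. A hedged sketch that instead computes the true-positive rate separately per resource-allocation interval (this per-interval aggregation is an illustrative simplification, not the exact metric proposed in the thesis):

```python
def tpr(alerts, injected, window):
    """True-positive rate within one resource-allocation interval."""
    start, end = window
    in_window = [t for t in injected if start <= t < end]
    detected = sum(any(abs(a - t) < 1.0 for a in alerts) for t in in_window)
    return detected / len(in_window) if in_window else None

# Resource-allocation intervals (seconds, assumed): 2 vCPUs, then 4 vCPUs
# after a hotplug event at t=300.
intervals = {"2 vCPUs": (0, 300), "4 vCPUs": (300, 600)}
injected_ts = [10, 50, 310, 400, 550]   # ground truth from attack injection
alert_ts = [10.2, 310.4, 400.1]         # alerts raised by the IDS under test

for label, window in intervals.items():
    print(f"{label}: TPR = {tpr(alert_ts, injected_ts, window):.2f}")
# A single overall TPR (3/5 = 0.60) would hide that this IDS detected
# only 1 of 2 attacks with 2 vCPUs but 2 of 3 with 4 vCPUs.
```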
In summary, this thesis presents the first systematization of the state-of-the-art on IDS evaluation, considering workloads, metrics and measurement methodologies as integral parts of every IDS evaluation approach. In addition, we are the first to examine the hypercall attack surface of hypervisors in detail and to propose an approach using attack injection for evaluating IDSs in virtualized environments. Finally, this thesis presents the first metric and measurement methodology for quantifying the attack detection accuracy of IDSs in virtualized environments that feature elastic resource provisioning.
From a technical perspective, as part of the proposed approach for evaluating IDSs, this thesis presents hInjector, a tool for injecting hypercall attacks. We designed hInjector to enable the rigorous, representative, and practically feasible evaluation of IDSs using attack injection. We demonstrate the application and practical usefulness of hInjector, as well as of the proposed approach, by evaluating a representative hypervisor-based IDS designed to detect hypercall attacks. While we focus on evaluating the capabilities of IDSs to detect hypercall attacks, the proposed IDS evaluation approach can be generalized and applied in a broader context. For example, it may be directly used to also evaluate security mechanisms of hypervisors, such as hypercall access control (AC) mechanisms. It may also be applied to evaluate the capabilities of IDSs to detect attacks involving operations that are functionally similar to hypercalls, for example, the input/output control (ioctl) calls that the Kernel-based Virtual Machine (KVM) hypervisor supports. For IDSs in virtualized environments featuring elastic resource provisioning, our approach for injecting hypercall attacks can be applied in combination with the attack detection accuracy metric and measurement methodology we propose. Our approach for injecting hypercall attacks, and our metric and measurement methodology, can also be applied independently beyond the scenarios considered in this thesis. The wide spectrum of security mechanisms in virtualized environments whose evaluation can directly benefit from the contributions of this thesis (e.g., hypervisor-based IDSs, IDSs deployed as VNFs, and AC mechanisms) reflects the practical implications of the thesis.
Computer systems have replaced the human workforce in many parts of everyday life, but there still exists a large number of tasks that cannot yet be automated. This also includes tasks that we consider to be rather simple, like the categorization of image content or subjective ratings. Traditionally, these tasks have been completed by designated employees or outsourced to specialized companies. Recently, however, the crowdsourcing paradigm has increasingly been applied to complete such human-labor-intensive tasks. Crowdsourcing aims at leveraging the huge number of Internet users all around the globe, who form a potentially highly available, low-cost, and easily accessible workforce.
To enable the distribution of work on a global scale, new web-based services emerged, so-called crowdsourcing platforms, that act as mediators between employers posting tasks and workers completing tasks. However, the crowdsourcing approach, especially the large anonymous worker crowd, results in two types of challenges. On the one hand, there are technical challenges, like the dimensioning of crowdsourcing platform infrastructure or the interconnection of crowdsourcing platforms and machine clouds to build hybrid services. On the other hand, there are conceptual challenges, like identifying reliable workers or migrating traditional off-line work to the crowdsourcing environment. To tackle these challenges, this monograph analyzes and models current crowdsourcing systems in order to optimize crowdsourcing workflows and the underlying infrastructure. First, a categorization of crowdsourcing tasks and platforms is developed to derive generalizable properties. Based on this categorization and an exemplary analysis of a commercial crowdsourcing platform, models for different aspects of crowdsourcing platforms and crowdsourcing mechanisms are developed. A special focus is put on quality assurance mechanisms for crowdsourcing tasks, where the models are used to assess the suitability and costs of existing approaches for different types of tasks. Further, a novel quality assurance mechanism based solely on user interactions is proposed and its feasibility is shown. The findings from the analysis of existing platforms, the derived models, and the developed quality assurance mechanisms are finally used to derive best practices for two crowdsourcing use cases: crowdsourcing-based network measurements and crowdsourcing-based subjective user studies. These two exemplary use cases cover aspects typical of a large range of crowdsourcing tasks and illustrate the potential benefits, but also the resulting challenges, of using crowdsourcing.
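A standard quality assurance baseline against which such mechanisms are compared is majority voting over redundant task assignments. A minimal sketch of that baseline and its cost (the interaction-based mechanism proposed in the monograph is not reproduced here; the payment figure is an assumption):

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate redundant worker answers for one task."""
    counts = Counter(answers)
    winner, votes = counts.most_common(1)[0]
    confidence = votes / len(answers)
    return winner, confidence

# Each task is sent to several workers; cost grows linearly with redundancy.
PAY_PER_ANSWER = 0.05  # USD per answer (assumed)
task_answers = ["cat", "cat", "dog", "cat", "cat"]

label, agreement = majority_vote(task_answers)
cost = PAY_PER_ANSWER * len(task_answers)
print(f"label={label}, agreement={agreement:.0%}, cost=${cost:.2f} per task")
```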
With the ongoing digitalization and globalization of the labor markets, the crowdsourcing paradigm is expected to gain even more importance in the coming years. This is already evident in newly emerging fields of crowdsourcing, such as enterprise crowdsourcing and mobile crowdsourcing. The models developed in this monograph enable platform providers to optimize their current systems and employers to optimize their workflows to increase their commercial success. Moreover, the results help to improve the general understanding of crowdsourcing systems, a key to identifying necessary adaptations and future improvements.
Today's Internet is no longer controlled by a single stakeholder, e.g. a standards body or a telecommunications company.
Rather, the interests of a multitude of stakeholders, e.g. application developers, hardware vendors, cloud operators, and network operators, collide during the development and operation of applications in the Internet.
Each of these stakeholders considers different key performance indicators (KPIs) to be important and attempts to optimise scenarios in its favour.
This results in different, often opposing, views and can cause problems for the entire network ecosystem.
One example of such a scenario is a signalling storm in the mobile Internet; one of the largest occurred in Japan in 2012 due to the release and high popularity of a free instant-messaging application.
The network traffic generated by the application caused a high number of connections to the Internet to be established and terminated.
This resulted in a similarly high number of signalling messages in the mobile network, causing overload and a loss of service for 2.5 million users over 4 hours.
While the network operator suffers the largest impact of this signalling overload, it does not control the application.
Thus, the network operator cannot change the application traffic characteristics to generate less network signalling traffic.
The stakeholders who could prevent, or at least reduce, such behaviour, i.e. application developers or hardware vendors, have no direct benefit from modifying their products in such a way.
This results in a clash of interests that negatively impacts network performance for all participants.
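A back-of-the-envelope model makes the coupling between application behaviour and signalling load concrete: every transition of a device's radio connection from idle to connected triggers a fixed number of signalling messages, so frequent short transmissions such as keep-alives multiply the load. A hedged sketch with illustrative parameters (the per-transition message count and timing values are assumptions, not measured figures from the Japan incident):

```python
# Signalling load caused by periodic application keep-alive messages.
# All parameters except the user count are illustrative assumptions.
USERS = 2_500_000              # affected user base (from the incident)
KEEPALIVE_PERIOD_S = 60        # app sends a keep-alive every minute (assumed)
IDLE_TIMEOUT_S = 10            # radio released after inactivity (assumed)
MSGS_PER_TRANSITION = 30       # signalling msgs per idle<->connected cycle (assumed)

# Each keep-alive arrives after the radio has gone idle, so every
# keep-alive triggers a full connection setup/teardown cycle.
transitions_per_user_per_s = 1 / KEEPALIVE_PERIOD_S
signalling_rate = USERS * transitions_per_user_per_s * MSGS_PER_TRANSITION
print(f"{signalling_rate:,.0f} signalling messages per second")  # ~1,250,000

# Doubling the keep-alive period would halve the signalling load without
# any change to the network itself -- but only the app developer can do it.
```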
The goal of this monograph is to provide an overview of the complex structure of stakeholder relationships in today's Internet applications in mobile networks.
To this end, we study different scenarios in which such interests clash and suggest methods by which trade-offs can be optimised for all participants.
If such an optimisation is not possible, or attempts at it might lead to adverse effects, we discuss the underlying reasons.