000 Computer Science, Information Science, General Works
Introduction.
Mobile health (mHealth) integrates mobile devices into healthcare, enabling remote monitoring, data collection, and personalized interventions. Machine Learning (ML), a subfield of Artificial Intelligence (AI), can use mHealth data to confirm or extend domain knowledge by finding associations within the data, with the goal of improving healthcare decisions. In this work, two data collection techniques were used for mHealth data fed into ML systems: Mobile Crowdsensing (MCS), a collaborative data gathering approach, and Ecological Momentary Assessments (EMA), which capture real-time individual experiences within the individual’s everyday environment using questionnaires and sensors. We collected EMA and MCS data on tinnitus and COVID-19. About 15 % of the world’s population suffers from tinnitus.
Materials & Methods.
This thesis investigates the challenges of ML systems when using MCS and EMA data. It asks: How can ML confirm or broaden domain knowledge? Domain knowledge refers to expertise and understanding in a specific field, gained through experience and education. Are ML systems always superior to simple heuristics, and if so, how can one achieve explainable AI (XAI) in the presence of mHealth data? An XAI method enables a human to understand why a model makes certain predictions. Finally, which guidelines can be beneficial for the use of ML within the mHealth domain? In tinnitus research, ML discerns gender-, temperature-, and season-related variations among patients. In the realm of COVID-19, we collaboratively designed a COVID-19 check app for public education, incorporating EMA data to offer informative feedback on COVID-19-related matters. This thesis uses seven EMA datasets with more than 250,000 assessments. Our analyses revealed a set of challenges: app user over-representation, time gaps, identity ambiguity, and operating-system-specific rounding errors, among others. Our systematic review of 450 medical studies assessed the prior utilization of XAI methods.
Results.
ML models predict gender and tinnitus perception, validating gender-linked tinnitus disparities. Using season and temperature to predict tinnitus shows the association of these variables with tinnitus. Multiple assessments of one app user can constitute a group. Neglecting these groups in data sets leads to model overfitting. In select instances, heuristics outperform ML models, highlighting the need for domain expert consultation to unveil hidden groups or find simple heuristics.
Conclusion.
This thesis proposes guidelines for mHealth-related data analyses and improves estimates of ML performance. Close communication with medical domain experts is essential to identify latent user subsets and the incremental benefit of ML.
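To make the grouping pitfall concrete, here is a minimal sketch of group-aware cross-validation, assuming scikit-learn and hypothetical feature and label arrays; keeping all assessments of one app user in the same fold avoids the overfitting described above:

    # Illustrative sketch (assumes scikit-learn); data columns are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    n = 1000
    user_id = rng.integers(0, 50, size=n)      # ~20 EMA assessments per app user
    X = rng.normal(size=(n, 8))                # assessment features (placeholder)
    y = rng.integers(0, 2, size=n)             # e.g., perceived tinnitus yes/no

    # GroupKFold keeps every assessment of a user in a single fold, so the
    # model is always evaluated on unseen users rather than unseen rows.
    cv = GroupKFold(n_splits=5)
    scores = cross_val_score(RandomForestClassifier(random_state=0),
                             X, y, cv=cv, groups=user_id)
    print(scores.mean())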
The ongoing and evolving usage of networks presents two critical challenges for current and future networks that require attention: (1) the task of effectively managing the vast and continually increasing data traffic and (2) the need to address the substantial number of end devices resulting from the rapid adoption of the Internet of Things. Besides these challenges, there is a mandatory need for reduced energy consumption, more efficient resource usage, and streamlined processes without sacrificing service quality. We address these efforts comprehensively, tackling the monitoring and quality assessment of streaming applications, a leading contributor to total Internet traffic, as well as conducting an exhaustive analysis of network performance within a Long Range Wide Area Network (LoRaWAN), one of the rapidly emerging Low Power Wide Area Network (LPWAN) solutions.
Deep Learning (DL) models are trained on a downstream task by feeding (potentially preprocessed) input data through a trainable Neural Network (NN) and updating its parameters to minimize the loss function between the predicted and the desired output. While this general framework has mainly remained unchanged over the years, the architectures of the trainable models have greatly evolved. Even though it is undoubtedly important to choose the right architecture, we argue that it is also beneficial to develop methods that address other components of the training process. We hypothesize that utilizing domain knowledge can be helpful to improve DL models in terms of performance and/or efficiency. Such model-agnostic methods can be applied to any existing or future architecture. Furthermore, the black box nature of DL models motivates the development of techniques to understand their inner workings. Considering the rapid advancement of DL architectures, it is again crucial to develop model-agnostic methods.
In this thesis, we explore six principles that incorporate domain knowledge to understand or improve models. They are applied either on the input or output side of the trainable model. Each principle is applied to at least two DL tasks, leading to task-specific implementations. To understand DL models, we propose to use Generated Input Data coming from a controllable generation process requiring knowledge about the data properties. This way, we can understand the model’s behavior by analyzing how it changes when one specific high-level input feature changes in the generated data. On the output side, Gradient-Based Attribution methods create a gradient at the end of the NN and then propagate it back to the input, indicating which low-level input features have a large influence on the model’s prediction. The resulting input features can be interpreted by humans using domain knowledge.
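As an illustration of the Gradient-Based Attribution principle, the following minimal PyTorch sketch computes a vanilla gradient saliency map; the model and input are placeholders, and this is only one simple instance of the family of methods discussed:

    # Vanilla gradient saliency, one instance of gradient-based attribution
    # (illustrative sketch; model and input shapes are placeholders).
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64),
                          nn.ReLU(), nn.Linear(64, 10))
    model.eval()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)  # input image
    score = model(x)[0].max()          # logit of the predicted class
    score.backward()                   # propagate gradient back to the input

    # Large absolute gradients mark low-level input features (here: pixels)
    # with a strong local influence on the model's prediction.
    saliency = x.grad.abs().squeeze()
    print(saliency.shape)              # torch.Size([28, 28])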
To improve the trainable model in terms of downstream performance, data and compute efficiency, or robustness to unwanted features, we explore principles that each address one of the training components besides the trainable model. Input Masking and Augmentation directly modifies the training input data, integrating knowledge about the data and its impact on the model’s output. We also explore the use of Feature Extraction using Pretrained Multimodal Models which can be seen as a beneficial preprocessing step to extract useful features. When no training data is available for the downstream task, using such features and domain knowledge expressed in other modalities can result in a Zero-Shot Learning (ZSL) setting, completely eliminating the trainable model. The Weak Label Generation principle produces new desired outputs using knowledge about the labels, giving either a good pretraining or even exclusive training dataset to solve the downstream task. Finally, improving and choosing the right Loss Function is another principle we explore in this thesis. Here, we enrich existing loss functions with knowledge about label interactions or utilize and combine multiple task-specific loss functions in a multitask setting.
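The Zero-Shot Learning principle can be illustrated with a small sketch: an input embedding is compared against embeddings of textual label descriptions from a pretrained multimodal model, so no trainable model is needed. The encoder outputs are mocked with random vectors here; in practice they would come from a real pretrained encoder:

    # Zero-shot classification via a shared embedding space (sketch).
    # The embeddings stand in for outputs of any pretrained multimodal
    # encoder (e.g., a CLIP-style model); they are assumptions here.
    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def zero_shot_predict(input_embedding, label_embeddings, labels):
        # The label whose text embedding is closest to the input embedding
        # wins; no trainable model and no task-specific training data needed.
        sims = [cosine(input_embedding, e) for e in label_embeddings]
        return labels[int(np.argmax(sims))]

    labels = ["cat", "dog"]
    label_embeddings = [np.random.rand(512) for _ in labels]  # from text encoder
    input_embedding = np.random.rand(512)                     # from image encoder
    print(zero_shot_predict(input_embedding, label_embeddings, labels))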
We apply the principles to classification, regression, and representation tasks as well as to image and text modalities. We propose, apply, and evaluate existing and novel methods to understand and improve the model. Overall, this thesis introduces and evaluates methods that complement the development and choice of DL model architectures.
Robots have always held a fascination for humans. It is their resemblance to humans that makes technical systems endowed with higher intelligence appear as fascinating as they are frightening. The idea of creating technical creatures that can hold a candle to us exalted human beings, or even surpass us, will not let us go. Yet the realization of the benefits such beings could bring us in every conceivable area quickly gives way to skepticism about the disempowerment and devaluation of the human being. Even today, although research in many areas is still in its infancy, we come into contact in numerous areas of life with technical systems that exert a strong effect on us and raise many fundamental questions.
This work is devoted to the ethical dimension of autonomous (care) systems and addresses concrete application scenarios for this purpose. It is concerned not with general ethical questions but specifically with the compatibility of autonomous technical systems with the human dignity of their users. The influence of autonomous technical innovations on humanity's self-conception (the image of the human being) is also part of the work.
The principle of human dignity serves as the benchmark for modern technical developments because of its enormous significance for the law and for the underlying, general image of the human being. In a society oriented toward a humanistic worldview, human dignity stands above all else as the supreme value that moral and legal developments must live up to. Modern developments must therefore always be examined with regard to their compatibility with human dignity. In this way, it can be determined whether regulation is needed and how individual regulations should be designed.
At the same time, however, human dignity must also do justice to societal developments. Accordingly, the Federal Constitutional Court regards it as a principle that faces up to current challenges and compels societal discourse.
This work is intended to contribute to the societal debate, already under way, about technological progress and specifically about the problems that accompany the increasing autonomy of technical systems.
This work in the field of digital literary stylistics and computational literary studies is concerned with theoretical questions of literary genre, with the design of a corpus of nineteenth-century Spanish-American novels, and with its empirical analysis in terms of subgenres of the novel. The digital text corpus consists of 256 Argentine, Cuban, and Mexican novels from the period between 1830 and 1910. It was created with the goal of analyzing thematic subgenres and literary currents that were represented in numerous novels in the nineteenth century by means of computational text categorization methods. The texts have been gathered from different sources, encoded in the standard of the Text Encoding Initiative (TEI), and enriched with detailed bibliographic and subgenre-related metadata, as well as with structural information.
To categorize the texts, statistical classification and a family resemblance analysis relying on network analysis are used, with the aim of examining how the subgenres, which are understood as communicative, conventional phenomena, can be captured on the stylistic, textual level of the novels that participate in them. The result is that both thematic subgenres and literary currents are textually coherent to degrees of 70–90 %, depending on the individual subgenre constellation, meaning that the communicatively established subgenre classifications can be accurately captured to this extent in terms of textually defined classes.
Besides the empirical focus, the dissertation also aims to relate literary theoretical genre concepts to the ones used in digital genre stylistics and computational literary studies as subfields of digital humanities. It is argued that literary text types, conventional literary genres, and textual literary genres should be distinguished on a theoretical level to improve the conceptualization of genre for digital text analysis.
Environmental issues have emerged especially since humans began burning fossil fuels, which leads to air pollution and climate change that harm the environment. The substantial consequences of these issues have evoked strong efforts to assess the state of our environment.
Various environmental machine learning (ML) tasks aid these efforts. These tasks concern environmental data but are otherwise common ML tasks, i.e., datasets are split (training, validation, test), hyperparameters are optimized on validation data, and test set metrics measure a model’s generalizability. This work focuses on the following environmental ML tasks: Regarding air pollution, land use regression (LUR) estimates air pollutant concentrations at locations where no measurements are available, based on measured locations and each location’s land use (e.g., industry, streets). For LUR, this work uses data from London (modeled) and Zurich (measured). Concerning climate change, a common ML task is model output statistics (MOS), where a climate model’s output for a study area is altered to better fit Earth observations and provide more accurate climate data. This work uses the regional climate model (RCM) REMO and Earth observations from the E-OBS dataset for MOS. Another climate-related task is grain size distribution interpolation, where soil properties at locations without measurements are estimated based on the few measured locations. This can provide climate models with soil information that is important for hydrology. For this task, data from Lower Franconia is used.
Such environmental ML tasks commonly have a number of properties: (i) geospatiality, i.e., their data refers to locations relative to the Earth’s surface. (ii) The environmental variables to estimate or predict are usually continuous. (iii) Data can be imbalanced due to relatively rare extreme events (e.g., extreme precipitation). (iv) Multiple related potential target variables can be available per location, since measurement devices often contain different sensors. (v) Labels are spatially often only sparsely available since conducting measurements at all locations of interest is usually infeasible. These properties present challenges but also opportunities when designing ML methods for such tasks.
In the past, environmental ML tasks have been tackled with conventional ML methods, such as linear regression or random forests (RFs). However, the field of ML has made tremendous leaps beyond these classic models through deep learning (DL). In DL, models use multiple layers of neurons, producing increasingly higher-level feature representations with growing layer depth. DL has made previously infeasible ML tasks feasible, significantly improved performance on many tasks compared to existing ML models, and eliminated the need for manual feature engineering in some domains due to its ability to learn features from raw data. To harness these advantages for environmental domains, it is promising to develop novel DL methods for environmental ML tasks.
This thesis presents methods for dealing with special challenges and exploiting opportunities inherent to environmental ML tasks in conjunction with DL. To this end, the proposed methods explore the following techniques: (i) Convolutions as in convolutional neural networks (CNNs) to exploit recurring spatial patterns in geospatial data. (ii) Posing the problems as regression tasks to estimate the continuous variables. (iii) Density-based weighting to improve estimation performance for rare and extreme events. (iv) Multi-task learning to make use of multiple related target variables. (v) Semi-supervised learning to cope with label sparsity. Using these techniques, this thesis considers four research questions: (i) Can air pollution be estimated without manual feature engineering? This is answered positively by the introduction of the CNN-based LUR model MapLUR as well as the off-the-shelf LUR solution OpenLUR. (ii) Can colocated pollution data improve spatial air pollution models? Multi-task learning for LUR is developed for this, showing potential for improvements with colocated data. (iii) Can DL models improve the quality of climate model outputs? The proposed DL climate MOS architecture ConvMOS demonstrates this. Additionally, semi-supervised training of multilayer perceptrons (MLPs) for grain size distribution interpolation is presented, which can provide improved input data. (iv) Can DL models be taught to better estimate climate extremes? To this end, density-based weighting for imbalanced regression (DenseLoss) is proposed and applied to the DL architecture ConvMOS, improving climate extremes estimation. These methods show how especially DL techniques can be developed for environmental ML tasks with their special characteristics in mind. This allows for better models than previously possible with conventional ML, leading to more accurate assessment and better understanding of the state of our environment.
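To illustrate the idea behind density-based weighting for imbalanced regression, the following sketch weights each sample's loss inversely to an estimated label density, so rare extremes count more; this is a simplified illustration of the general principle, not the exact DenseLoss formulation:

    # Density-based sample weighting for imbalanced regression (sketch of the
    # general idea behind DenseLoss; the exact formulation may differ).
    import numpy as np
    from scipy.stats import gaussian_kde

    y = np.random.gamma(2.0, 2.0, size=5000)   # skewed target, e.g. precipitation

    density = gaussian_kde(y)(y)               # estimated label density per sample
    alpha = 1.0                                # strength of the reweighting
    weights = np.maximum(1.0 - alpha * density / density.max(), 0.0) + 0.1

    def weighted_mse(y_true, y_pred, w):
        # Rare (low-density) targets such as extremes get larger weights,
        # so errors on extreme events contribute more to the loss.
        return np.mean(w * (y_true - y_pred) ** 2)

    print(weighted_mse(y, np.full_like(y, y.mean()), weights))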
In manufacturing companies, various methods are used to plan, monitor, and control production processes. One of these methods is known as activity-on-node network planning. The individual production steps are defined as nodes and connected by arrows, which represent the relationships between the individual activities and thus the production flow. This technique gives users a comprehensive overview of the individual process relations. In addition, it can be used to determine activity times and product completion times, enabling detailed production planning. One drawback of this technique lies in its representation of only a single executable process sequence. If a disruption occurs and an activity can no longer be carried out, the original process must be abandoned, making replanning necessary. Alternatives for the disrupted activity are needed so that the process can continue despite the disruption. This work therefore describes an extension of activity-on-node network planning that makes it possible to specify alternative activities for individual activities in addition to the planned target process. This method is called the maximal network plan (Maximalnetzplan). When a disruption occurs, the alternatives are evaluated automatically and presented to the user in prioritized order. Using the maximal network plan, costly replanning can be avoided. An assembly process serves as an application example demonstrating the usability of the method. Furthermore, a timing analysis of randomly generated maximal network plans substantiates the case for executing alternatives and thus the benefit of the maximal network plan. It should also be noted that terms used in this work such as user, worker, or employee are written in the masculine form; this is solely for simplicity and is not intended to discriminate against other genders. The chosen form is meant to address all genders, whether male, female, or diverse.
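A minimal sketch of the maximal network plan idea described above, with hypothetical activities and a simple duration-based priority rule standing in for the actual evaluation method:

    # Illustrative sketch of the maximal network plan idea: each activity in an
    # activity-on-node plan may carry prioritized alternatives (all names and
    # the scoring rule are hypothetical, not the thesis' exact method).
    plan = {
        "A1": {"duration": 5, "successors": ["A2"], "alternatives": []},
        "A2": {"duration": 3, "successors": ["A3"],
               "alternatives": [{"id": "A2a", "duration": 4},
                                {"id": "A2b", "duration": 6}]},
        "A3": {"duration": 2, "successors": [], "alternatives": []},
    }

    def evaluate_alternatives(plan, disrupted):
        # On disruption, rank the stored alternatives instead of replanning;
        # here the shortest duration wins, a stand-in for the real priority rule.
        alts = plan[disrupted]["alternatives"]
        return sorted(alts, key=lambda a: a["duration"])

    for alt in evaluate_alternatives(plan, "A2"):
        print(alt["id"], alt["duration"])   # A2a first: smallest delay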
Serverless computing is an emerging cloud computing paradigm that offers a high-level application programming model with utilization-based billing. It enables the deployment of cloud applications without managing the underlying resources or worrying about other operational aspects. Function-as-a-Service (FaaS) platforms implement serverless computing by allowing developers to execute code on demand in response to events, with continuous scaling, while paying only for the time used with sub-second metering. Cloud providers have further introduced many fully managed services for databases, messaging buses, and storage that also implement a serverless computing model. Applications composed of these fully managed services and FaaS functions are quickly gaining popularity in both industry and academia.
However, due to this rapid adoption, much information surrounding serverless computing is inconsistent and often outdated as the serverless paradigm evolves. This makes the performance engineering of serverless applications and platforms challenging, as there are many open questions, such as: What types of applications is serverless computing well suited for, and what are its limitations? How should serverless applications be designed, configured, and implemented? Which design decisions impact the performance properties of serverless platforms, and how can they be optimized? These and many other open questions can be traced back to an inconsistent understanding of serverless applications and platforms, which could present a major roadblock to the adoption of serverless computing.
In this thesis, we address the lack of performance knowledge surrounding serverless applications and platforms from multiple angles: we conduct empirical studies to further the understanding of serverless applications and platforms, we introduce automated optimization methods that simplify the operation of serverless applications, and we enable the analysis of design tradeoffs of serverless platforms by extending white-box performance modeling.
Since the advent of high-throughput sequencing technologies in the mid-2000s, RNA sequencing (RNA-seq) has been established as the method of choice for studying gene expression. In comparison to microarray-based methods, which were mainly used to study gene expression before the rise of RNA-seq, RNA-seq can profile the entire transcriptome of an organism without the need to predefine genes of interest. Today, a wide variety of RNA-seq methods and protocols exist, including dual RNA sequencing (dual RNA-seq) and multi RNA sequencing (multi RNA-seq), which simultaneously investigate the transcriptomes of two or more species, respectively. To this end, the total RNA of all interacting species is sequenced together and only separated in silico. Compared to conventional RNA-seq, which can only investigate one species at a time, dual RNA-seq and multi RNA-seq analyses can connect the transcriptome changes of the investigated species and thus give a clearer picture of the interspecies interactions. Dual RNA-seq and multi RNA-seq have been applied to a variety of host-pathogen, mutualistic, and commensal interaction systems.
We applied dual RNA-seq to a host-pathogen system of human mast cells and Staphylococcus aureus (S. aureus). S. aureus, a commensal gram-positive bacterium, can become an opportunistic pathogen and infect skin lesions of atopic dermatitis (AD) patients. Among the first immune cells S. aureus encounters are mast cells, which have previously been shown to kill the bacteria by discharging antimicrobial products and releasing extracellular traps made of protein and deoxyribonucleic acid (DNA). However, S. aureus is known to evade the host’s immune response by internalizing within mast cells. Our dual RNA-seq analysis of different infection settings revealed that mast cells and S. aureus need physical contact to influence each other’s gene expression. We could show that S. aureus cells internalizing within mast cells undergo profound transcriptome changes to adjust their metabolism for survival in the intracellular niche. On the host side, we found that infected mast cells elicit a type-I interferon (IFN-I) response in an autocrine manner and in a paracrine manner to non-infected bystander cells. Our study provides the first evidence that mast cells are capable of producing IFN-I upon infection with a bacterial pathogen.
Detecting anomalies in transaction data is an important task with a high potential to avoid financial loss due to irregularities carried out deliberately or inadvertently, such as credit card fraud, occupational fraud in companies, or ordering and accounting errors. With the ongoing digitization of our world, data-driven approaches, including machine learning, can draw benefit from data with less manual effort and feature engineering. A large variety of machine learning-based anomaly detection methods approach this by learning a precise model of normality from which anomalies can be distinguished. Modeling normality in transactional data, however, requires capturing distributions and dependencies within the data precisely, with special attention to numerical dependencies such as quantities, prices, or amounts.
To implicitly model numerical dependencies, Neural Arithmetic Logic Units (NALU) have been proposed as a neural architecture. In practice, however, they exhibit stability and precision issues.
Therefore, we first develop an improved neural network architecture, iNALU, which is designed to better model numerical dependencies as found in transaction data. We compare this architecture to the previous approach and show in several experiments of varying complexity that our novel architecture provides better precision and stability.
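For context, the following condensed PyTorch sketch shows the original NALU cell (Trask et al.) whose precision and stability issues motivate iNALU; iNALU modifies this design, so the code illustrates the starting point rather than the improved architecture:

    # Condensed sketch of the original NALU cell (Trask et al., 2018).
    import torch
    import torch.nn as nn

    class NALU(nn.Module):
        def __init__(self, n_in, n_out, eps=1e-10):
            super().__init__()
            self.W_hat = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
            self.M_hat = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
            self.G = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
            self.eps = eps

        def forward(self, x):
            # Weights are pushed towards {-1, 0, 1} to encourage exact arithmetic.
            W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
            a = x @ W.t()                                  # additive path
            # Multiplicative path works in log space; the eps is one source of
            # the precision issues that iNALU addresses.
            m = torch.exp(torch.log(torch.abs(x) + self.eps) @ W.t())
            g = torch.sigmoid(x @ self.G.t())              # learned gate
            return g * a + (1 - g) * m

    out = NALU(2, 1)(torch.tensor([[3.0, 4.0]]))
    print(out.shape)   # torch.Size([1, 1])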
We integrate this architecture into two generative neural network models adapted for transaction data and investigate how well normal behavior is modeled. We show that both architectures can successfully model normal transaction data, with our neural architecture improving generative performance for one model.
Categorical and numerical variables are common in transaction data, but many machine learning methods only process numerical representations. We therefore explore different representation learning techniques to transform categorical transaction data into dense numerical vectors. We extend this approach by proposing an outlier-aware discretization, thus incorporating numerical attributes into the computation of categorical embeddings, and investigate latent spaces as well as quantitative performance for anomaly detection.
Next, we evaluate different scenarios for anomaly detection on transaction data. We extend our iNALU architecture to a neural layer that can model both numerical and non-numerical dependencies and evaluate it in a supervised and one-class setting. We investigate the stability and generalizability of our approach and show that it outperforms a variety of models in the balanced supervised setting and performs comparably in the one-class setting. Finally, we evaluate three approaches to using a generative model as an anomaly detector and compare the anomaly detection performance.
Latency is an inherent problem of computing systems. Each computation takes time until the result is available. Virtual reality systems use elaborate computing resources to create virtual experiences. The latency of those systems is often ignored or assumed to be small enough to provide a good experience.
This cumulative thesis comprises published, peer-reviewed research papers exploring the behaviour and effects of latency. Contrary to the common description of latency as time-invariant, latency is shown to fluctuate. Few other researchers have looked into this time-variant behaviour. This thesis explores time-variant latency with a focus on randomly occurring latency spikes. Latency spikes are observed both for small algorithms and as end-to-end latency in complete virtual reality systems. Most latency measurements gather close to the mean latency, with potentially multiple smaller clusters of larger latency values and rare extreme outliers. The latency behaviour differs for different implementations of an algorithm. Operating system schedulers and programming language environments such as garbage collectors contribute to the overall latency behaviour. The thesis demonstrates these influences using the example of different implementations of message passing.
The plethora of latency sources results in unpredictable latency behaviour, so measuring and reporting it in scientific experiments is important. This thesis describes established approaches to measuring latency and proposes an enhanced setup to gather detailed information. It further proposes dissecting the measured data with a stacked z-outlier test to separate the clusters of latency measurements for better reporting.
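A sketch of how such a stacked z-outlier test might separate latency clusters: samples flagged as z-score outliers of the remaining data are split off and re-tested until no outliers remain. The details of the proposed procedure may differ; this only illustrates the principle:

    # Stacked z-outlier splitting of latency samples (illustrative sketch;
    # the exact procedure in the papers may differ in its details).
    import numpy as np

    def stacked_z_outlier_clusters(latencies, z_threshold=3.0):
        remaining = np.asarray(latencies, dtype=float)
        clusters = []
        while remaining.size:
            if remaining.size < 3:              # too few samples to test
                clusters.append(remaining)
                break
            z = (remaining - remaining.mean()) / remaining.std()
            outliers = np.abs(z) > z_threshold
            if not outliers.any():              # no outliers left: last cluster
                clusters.append(remaining)
                break
            clusters.append(remaining[~outliers])   # bulk around the mean
            remaining = remaining[outliers]         # re-test the spikes
        return clusters

    samples = np.concatenate([np.random.normal(16.7, 0.5, 990),   # frame time
                              np.random.normal(50.0, 2.0, 10)])   # latency spikes
    for c in stacked_z_outlier_clusters(samples):
        print(len(c), round(c.mean(), 1))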
Latency in virtual reality applications can degrade the experience in multiple ways. The thesis focuses on cybersickness as a major detrimental effect. An approach to simulate time-variant latency is proposed to make latency available as an independent variable in experiments on latency's effects. An experiment with modified latency shows that latency spikes can contribute to cybersickness. A review of related research shows that different time-invariant latency behaviour also contributes to cybersickness.
The digital transformation facilitates new forms of collaboration between companies along the supply chain and between companies and consumers. Besides sharing information on centralized platforms, blockchain technology is often regarded as a potential basis for this kind of collaboration. However, there is much hype surrounding the technology due to the rising popularity of cryptocurrencies, decentralized finance (DeFi), and non-fungible tokens (NFTs), which leads to potential issues being overlooked. Therefore, this thesis aims to investigate, highlight, and address the current weaknesses of blockchain technology: inefficient consensus, privacy, smart contract security, and scalability.
First, to provide a foundation, the four key challenges are introduced, and the research objectives are defined, followed by a brief presentation of the preliminary work for this thesis.
The following four parts highlight the four main problem areas of blockchain. Using big data analytics, we extracted and analyzed the blockchain data of six major blockchains to identify potential weaknesses in their consensus algorithms. To improve smart contract security, we classified smart contract functionalities to identify similarities in structure and design. The resulting taxonomy serves as a basis for future standardization efforts for security-relevant features, such as safe math functions and oracle services. To challenge privacy assumptions, we researched consortium blockchains from an adversary's perspective. We chose four blockchains with misconfigured nodes and extracted as much information from those nodes as possible. Finally, we compared scalability solutions for blockchain applications and developed a decision process that serves as a guideline for improving the scalability of blockchain applications.
Building on the scalability framework, we showcase three potential applications of blockchain technology. First, we develop a token-based approach for inter-company value stream mapping. By relying only on simple tokens instead of complex smart contracts, the computational load on the network is expected to be much lower compared to other solutions. The other two solutions offload transactions and computations from the main blockchain. The first uses secure multiparty computation to offload the matching of supply and demand for manufacturing capacities to a trustless network; the transaction is written to the main blockchain only after the match is made. The second uses the concept of payment channel networks to enable high-frequency bidirectional micropayments for WiFi sharing: the host gets paid for every second of data usage through an off-chain channel, and the full payment is written to the blockchain only after the connection to the client is terminated.
Finally, the thesis concludes by briefly summarizing and discussing the results and providing avenues for further research.
An enduring engineering problem is the creation of unreliable software that leads to unreliable systems. One reason for this is that source code is written in a complicated manner, making it too hard for humans to review and understand. Complicated code leads to other issues beyond dependability, such as expanded development efforts and ongoing difficulties with maintenance, ultimately costing developers and users more money.
There are many ideas regarding where the blame lies in the creation of buggy and unreliable systems. One prevalent idea is that the selected life cycle model is to blame. The oft-maligned “waterfall” life cycle model is a particularly popular recipient of blame. In response, many organizations changed their life cycle model in hopes of addressing these issues. Agile life cycle models have become very popular, and they promote communication between team members and end users. In theory, this communication leads to fewer misunderstandings and should lead to less complicated and more reliable code.
Changing the life cycle model can indeed address communication issues, which can resolve many problems with understanding requirements. However, most life cycle models do not specifically address coding practices or software architecture. Since life cycle models do not address the structure of the code, they are often ineffective at addressing problems related to code complicacy.
This dissertation answers several research questions concerning software complicacy, beginning with an investigation of traditional metrics and static analysis to evaluate their usefulness as measurement tools. This dissertation also establishes a new concept in applied linguistics by creating a measurement of software complicacy based on linguistic economy. Linguistic economy describes the efficiencies of speech, and this thesis shows the applicability of linguistic economy to software. Embedded in each topic is a discussion of the ramifications of overly complicated software, including the relationship of complicacy to software faults. Image recognition using machine learning is also investigated as a potential method of identifying problematic source code.
The central part of the work focuses on analyzing the source code of hundreds of different projects from different areas. A static analysis was performed on the source code of each project, and traditional software metrics were calculated. Programs were also analyzed using techniques developed by linguists to measure expression and statement complicacy and identifier complicacy. Professional software engineers were also directly surveyed to understand mainstream perspectives.
This work shows it is possible to use traditional metrics as indicators of potential project bugginess. It also shows that image recognition can identify problematic pieces of source code and that linguistic methods can determine which statements and expressions are least desirable and most complicated for programmers.
This work’s principal conclusion is that there are multiple ways to discover traits indicating that a project or a piece of source code has characteristics of being buggy. Traditional metrics and static analysis can be used to gain some understanding of software complicacy and bugginess potential. Linguistic economy provides a new tool for measuring software complicacy, and machine learning can predict where bugs may lie in source code. The significant implication of this work is that developers can recognize when a project is becoming buggy and take practical steps to avoid creating buggy projects.
With the miniaturization of satellites, a fundamental change has taken place in the space industry. Instead of single, big, monolithic satellites, more and more systems are now envisaged that consist of a number of small satellites forming cooperating systems in space. The lower costs for development and launch as well as the spatial distribution of these systems enable the implementation of new scientific missions and commercial services.
With this paradigm shift new challenges constantly emerge for satellite developers, particularly in the area of wireless communication systems and network protocols.
Satellites in low Earth orbits and ground stations form dynamic space-terrestrial networks. The characteristics of these networks differ fundamentally from those of other networks.
The resulting challenges with regard to communication system design, system analysis, packet forwarding, routing and medium access control as well as challenges concerning the reliability and efficiency of wireless communication links are addressed in this thesis.
The physical modeling of space-terrestrial networks is addressed by analyzing existing satellite systems and communication devices, by evaluating measurements and by implementing a simulator for space-terrestrial networks.
The resulting system and channel models were used as a basis for the prediction of the dynamic network topologies, link properties and channel interference. These predictions allowed for the implementation of efficient routing and medium access control schemes for space-terrestrial networks. Further, the implementation and utilization of software-defined ground stations is addressed, and a data upload scheme for the operation of small satellite formations is presented.
Novel deep learning (DL) architectures, better data availability, and a significant increase in computing power have enabled scientists to solve problems that were considered unassailable for many years. A case in point is the “protein folding problem”, a 50-year-old grand challenge in biology that was recently solved by the DL system AlphaFold. Other examples include large DL-based language models that generate newspaper articles hardly distinguishable from those written by humans. However, developing unbiased, reliable, and accurate DL models for various practical applications remains a major challenge, and many promising DL projects get stuck in the piloting stage, never to be completed. In light of these observations, this thesis investigates the practical challenges encountered throughout the life cycle of DL projects and proposes solutions to develop and deploy rigorous DL models.
The first part of the thesis is concerned with prototyping DL solutions in different domains. First, we conceptualize guidelines for applied image recognition and showcase their application in a biomedical research project. Next, we illustrate the bottom-up development of a DL backend for an augmented intelligence system in the manufacturing sector. We then turn to the fashion domain and present an artificial curation system for individual fashion outfit recommendations that leverages DL techniques and unstructured data from social media and fashion blogs. After that, we showcase how DL solutions can assist fashion designers in the creative process. Finally, we present our award-winning DL solution for the segmentation of glomeruli in human kidney tissue images that was developed for the Kaggle data science competition HuBMAP - Hacking the Kidney.
The second part continues the development path of the biomedical research project beyond the prototyping stage. Using data from five laboratories, we show that ground truth estimation from multiple human annotators and training of DL model ensembles help to establish objectivity, reliability, and validity in DL-based bioimage analyses.
In the third part, we present deepflash2, a DL solution that addresses the typical challenges encountered during training, evaluation, and application of DL models in bioimaging. The tool facilitates the objective and reliable segmentation of ambiguous bioimages through multi-expert annotations and integrated quality assurance. It is embedded in an easy-to-use graphical user interface and offers best-in-class predictive performance for semantic and instance segmentation under economical usage of computational resources.
Today’s cloud data centers consume an enormous amount of energy, and their energy consumption will continue to rise. An estimate from 2012 found that data centers consume about 30 billion watts of power, corresponding to roughly 263 TWh of energy per year, and consumption is projected to rise to 1,929 TWh by 2030. This projected rise in energy demand is fueled by a growing number of services deployed in the cloud: about 50% of enterprise workloads have been migrated to the cloud over the last decade, and an increasing number of devices use the cloud to provide functionality, causing data centers to grow. Estimates say more than 75 billion IoT devices will be in use by 2025.
The growing energy demand also increases CO2 emissions. Assuming a CO2 intensity of 200 g CO2 per kWh, the projected 1,929 TWh correspond to roughly 386 million tons of CO2 per year, more than the emissions of all energy-producing power plants in Germany in 2020.
However, data centers consume energy because they respond to service requests that are fulfilled through computing resources. Hence, it is not the users and devices that consume the energy in the data center but the software that controls the hardware. While the hardware physically consumes the energy, it is not always responsible for wasting it. The software itself plays a vital role in reducing the energy consumption and CO2 emissions of data centers. The scenario of our thesis therefore focuses on software development.
Nevertheless, we must first show developers that software contributes to energy consumption by providing evidence of its influence. The second step is to provide methods to assess an application's power consumption during different phases of the development process and to support modern DevOps and agile development methods. We therefore need an automatic selection of system-level energy-consumption models that can accommodate rapid changes in the source code, and application-level models that allow developers to locate power-consuming software parts for continuous improvement. Afterward, we need emulation to assess energy efficiency before the actual deployment.
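As a simple illustration of a system-level energy-consumption model of the kind discussed, a common baseline assumes power grows roughly linearly with utilization between idle and peak draw; the numbers below are placeholders, and the thesis's concrete models are more elaborate:

    # A common baseline for system-level power models: power grows roughly
    # linearly with CPU utilization between idle and peak draw (illustrative
    # placeholder values, not the thesis' concrete models).
    def power_watts(utilization, p_idle=100.0, p_max=250.0):
        # utilization in [0, 1]; p_idle/p_max in watts are assumed numbers
        return p_idle + (p_max - p_idle) * utilization

    # Energy of a request burst: integrate power over the busy period.
    seconds_busy = 0.2
    print(power_watts(0.75) * seconds_busy, "joules")   # 212.5 W * 0.2 s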
The application of Wireless Sensor Networks (WSNs) with a large number of tiny, cost-efficient, battery-powered sensor nodes that are able to communicate directly with each other poses many challenges. Due to the large number of communicating objects, and despite the use of a CSMA/CA MAC protocol, there may be many signal collisions. In addition, WSNs frequently operate under harsh conditions, and nodes are often prone to failure, for example, due to a depleted battery or unreliable components. Thus, nodes or even large parts of the network can fail. These aspects make reliable data dissemination and data storage a key issue. Therefore, these issues are addressed herein while keeping latency low, throughput high, and energy consumption low. Furthermore, simplicity as well as robustness to changes in conditions are essential here. In order to achieve these aims, a certain amount of redundancy has to be included. This can be realized, for example, by using network coding.
Existing approaches, however, often only perform well under certain conditions or for a specific scenario, have to perform a time-consuming initialization, require complex calculations, or do not provide the possibility of early decoding. Therefore, we developed a network coding procedure called Broadcast Growth Codes (BCGC) for reliable data dissemination, which performs well under a broad range of diverse conditions, such as a high probability of signal collisions, any degree of node mobility, a large number of nodes, or node failures. BCGC do not require complex initialization and use only simple XOR operations for encoding and decoding. Furthermore, decoding can start as soon as the first packet/codeword has been received. Evaluations using an in-house network simulator as well as a real-world testbed showed that BCGC enhance reliability and enable data to be retrieved dependably despite an unreliable network. In terms of latency, throughput, and energy consumption, depending on the conditions and the procedure being compared, BCGC achieve the same performance as existing procedures or even outperform them significantly, while being robust to changes in conditions and allowing low node complexity as well as early decoding.
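A sketch of the XOR encoding and peeling decoding that codes such as BCGC build on; this illustrates the simple XOR operations and early decoding mentioned above, not the exact BCGC procedure:

    # XOR coding with peeling decoding, the building blocks such codes rely on
    # (illustrative sketch, not the exact BCGC procedure).
    import random

    def encode(packets, degree):
        # XOR `degree` randomly chosen source packets into one codeword.
        idx = random.sample(range(len(packets)), degree)
        code = 0
        for i in idx:
            code ^= packets[i]
        return set(idx), code

    def peel(codewords):
        # Peeling decoder: whenever a codeword covers exactly one unknown
        # packet, XOR out the known ones; repeat until no progress is made.
        decoded, progress = {}, True
        while progress:
            progress = False
            for idx, code in codewords:
                unknown = idx - set(decoded)
                if len(unknown) == 1:
                    for i in idx - unknown:
                        code ^= decoded[i]
                    decoded[unknown.pop()] = code
                    progress = True
        return decoded

    packets = [0x11, 0x22, 0x33]
    # One degree-1 codeword seeds the decoder; early decoding starts with it.
    codewords = [({0}, packets[0])] + [encode(packets, 2) for _ in range(4)]
    print(peel(codewords))   # typically recovers all three packets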
In today's world, circumstances, processes, and requirements for systems in general, and in this thesis especially for Cyber-Physical Systems (CPS), are becoming increasingly complex and dynamic. In order to operate properly in such dynamic environments, systems must adapt to dynamic changes, which has led to the research area of Self-Adaptive Systems (SAS). These systems can deal with changes in their environment and in the system itself. In our daily lives, we come into contact with many different self-adaptive systems that are designed to support and improve our way of life. In this work, we focus on the two domains of Intelligent Transportation Systems (ITS) and logistics, as both provide complex and adaptable use cases to which the contributions of this thesis can be prototypically applied. However, the contributions are not limited to these areas and can also be generalized to other domains, such as the general area of CPS and the Internet of Things, including smart grids or even intelligent computer networks.
In ITS, real-time traffic control is an example of an adaptive system that monitors the environment, analyzes observations, and plans and executes adaptation actions. Another example is platooning, the ability of vehicles to drive with close inter-vehicle distances. This technology enables an increase in road throughput and safety, directly addressing the growing infrastructure needs caused by increasing road traffic. In logistics, the Vehicle Routing Problem (VRP) deals with the planning of road freight transport tours. To cope with the ever-increasing transport volume due to the rise of just-in-time production and online shopping, efficient and correct route planning for transports is important. Further, warehouses play a central role in any company's supply chain and contribute to its logistical success. The processes of storage assignment and order picking are the two main tasks in mezzanine warehouses and are highly affected by a dynamic environment. Usually, optimization algorithms are applied to find solutions in reasonable computation time. SASes can help address these dynamics by allowing systems to deal with changing demands and constraints.
For the application of SASes in the two areas of ITS and logistics, the definition of adaptation planning strategies is the key success factor. A wide range of adaptation planning strategies for different domains can be found in the literature, and the operator must select the most promising strategy for the problem at hand. However, the No-Free-Lunch theorem states that the performance of one strategy is not necessarily transferable to other problems. Accordingly, the algorithm selection problem, first defined in 1976, aims to find the best-performing algorithm for the current problem. Since then, this problem has been explored more and more, and the machine learning community, for example, considers it a learning problem. The ideas surrounding the algorithm selection problem have been applied in various use cases, but little research has been done to generalize the approaches. Moreover, especially in the field of SASes, the selection of the most appropriate strategy depends on the current situation of the system. Techniques for identifying the situation of a system, such as the use of rules or clustering techniques, can be found in the literature. This knowledge can then be used to improve algorithm selection or, in the scope of this thesis, the selection of adaptation planning strategies. In addition, knowledge about the current situation and the performance of strategies in similar, previously observed situations provides another opportunity for improvement. This ongoing learning and reasoning about the system and its environment is found in the research area of Self-Aware Computing (SeAC).
In this thesis, we explore common characteristics of adaptation planning strategies in the domains of ITS and logistics and present a self-aware optimization framework for adaptation planning strategies. We consider platooning coordination strategies from ITS and optimization techniques from logistics as adaptation planning strategies that can be exchanged during operation to better reflect the current situation. Further, we propose to integrate fairness and uncertainty handling mechanisms directly into the adaptation planning strategies. We then examine the complex structure of the logistics use cases VRP and mezzanine warehouses and identify their systems-of-systems structure. We propose a two-stage approach for vertical or nested systems and propose to consider the impact of intertwining horizontal or coexisting systems.
More specifically, we summarize the six main contributions of this thesis as follows:
First, we analyze specific characteristics of adaptation planning strategies with a particular focus on ITS and logistics. We use platooning and route planning in highly dynamic environments as representatives of ITS, and we use the rich Vehicle Routing Problem (rVRP) and mezzanine warehouses as representatives of the logistics domain. Using these case studies, we derive the need for situation-aware optimization of adaptation planning strategies and argue that fairness is an important consideration when applying these strategies in ITS. In logistics, we discuss that these complex systems can be considered as systems-of-systems and this structure affects each subsystem. Hence, we argue that the consideration of these characteristics is a crucial factor for the success of the system.
Second, we design a self-aware optimization framework for adaptation planning strategies. The optimization framework is abstracted into a third layer above the application and its adaptation planning system, which allows the concept to be applied to a diverse set of use cases. Further, the Domain Data Model (DDM) used to configure the framework enables the operator to easily apply it by defining the available adaptation planning strategies, parameters to be optimized, and performance measures. The framework consists of four components: (i) Coordination, (ii) Situation Detection, (iii) Strategy Selection, and (iv) Parameter Optimization. While the coordination component receives observations and triggers the other components, the situation detection applies rules or clustering techniques to identify the current situation. The strategy selection uses this knowledge to select the most promising strategy for the current situation, and the parameter optimization applies optimization algorithms to tune the parameters of the strategy. Moreover, we apply the concepts of the SeAC domain and integrate learning and reasoning processes to enable ongoing advancement of the framework. We evaluate our framework using the platooning use case and consider platooning coordination strategies as the adaptation planning strategies to be selected and optimized. Our evaluation shows that the framework is able to select the most appropriate adaptation strategy and learn the situational behavior of the system.
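An illustrative skeleton of how the four components could interact; the component names follow the abstract, while all signatures, the rule-based situation detection, and the platooning-flavored values are simplified assumptions:

    # Illustrative skeleton of the four framework components (names from the
    # abstract; all signatures and values are simplified assumptions).
    class SituationDetection:
        def detect(self, observations):
            # e.g., rules or clustering over observed metrics
            return "high_traffic" if observations["vehicles"] > 100 else "normal"

    class StrategySelection:
        def __init__(self, knowledge):
            self.knowledge = knowledge   # situation -> best known strategy
        def select(self, situation):
            return self.knowledge.get(situation, "default_strategy")

    class ParameterOptimization:
        def tune(self, strategy, observations):
            # stand-in for a real optimization algorithm tuning parameters
            return {"gap_m": 8 if strategy == "aggressive_platooning" else 15}

    class Coordination:
        def __init__(self):
            self.detection = SituationDetection()
            self.selection = StrategySelection(
                {"high_traffic": "aggressive_platooning"})
            self.optimization = ParameterOptimization()
        def on_observation(self, observations):
            # receives observations and triggers the other three components
            situation = self.detection.detect(observations)
            strategy = self.selection.select(situation)
            params = self.optimization.tune(strategy, observations)
            return strategy, params

    print(Coordination().on_observation({"vehicles": 140}))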
Third, we argue that fairness aspects, previously identified as an important characteristic of adaptation planning strategies, are best addressed directly as part of the strategies. Hence, focusing on platooning as an example use case, we propose a set of fairness mechanisms to balance positive and negative effects of platooning among all participants in a platoon. We design six vehicle sequence rotation mechanisms that continuously change the leader position among all participants, as this is the position with the least positive effects. We analyze these strategies on roads of different sizes and with different traffic volumes, and show that these mechanisms should also be chosen wisely.
Fourth, we address the uncertainty characteristic of adaptation planning strategies and propose a methodology to account for uncertainty and also address it directly as part of the adaptation planning strategies. We address the use case of fueling planning along a route associated with highly dynamic fuel prices and develop six utility functions that account for different aspects of route planning. Further, we incorporate uncertainty measures for dynamic fuel prices by adding penalties for longer travel times or greater distance to the next gas station. Through this approach, we are able to reduce the uncertainty at planning time and obtain a more robust route planning.
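One possible utility function of the kind described: the expected fuel cost is combined with penalties that grow with travel time and detour distance, since both make a quoted dynamic price less certain. The weights and fields are illustrative assumptions, not the thesis's actual six functions:

    # One possible utility function of the kind described (weights and fields
    # are illustrative assumptions, not the thesis' actual functions).
    def fueling_utility(price_per_l, liters, extra_minutes, detour_km,
                        time_penalty=0.5, detour_penalty=0.3):
        # Later arrival and longer detours make the quoted dynamic price less
        # certain, so both are penalized at planning time.
        return (price_per_l * liters
                + time_penalty * extra_minutes
                + detour_penalty * detour_km)

    stations = [
        {"id": "S1", "price": 1.69, "extra_minutes": 2, "detour_km": 0.5},
        {"id": "S2", "price": 1.63, "extra_minutes": 12, "detour_km": 6.0},
    ]
    best = min(stations, key=lambda s: fueling_utility(
        s["price"], 40, s["extra_minutes"], s["detour_km"]))
    print(best["id"])   # S1: the nominally cheaper station loses after penalties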
Fifth, we analyze optimization of nested systems-of-systems for the use case rVRP. Before proposing an approach to deal with the complex structure of the problem, we analyze important constraints and objectives that need to be considered when formulating a real-world rVRP. Then, we propose a two-stage workflow to optimize both systems individually, flexibly, and interchangeably. We apply Genetic Algorithms and Ant Colony Optimization (ACO) to both nested systems and compare the performance of our workflow with state-of-the-art optimization algorithms for this use case. In our evaluation, we show that the proposed two-stage workflow is able to handle the complex structure of the problem and consider all real-world constraints and objectives.
Finally, we study coexisting systems-of-systems by optimizing typical processes in mezzanine warehouses. We first define which ergonomic and economic constraints and objectives must be considered when addressing a real-world problem. Then, we analyze the interrelatedness of the storage assignment and order picking problems; we identify opportunities to design optimization approaches that optimize all objectives and aim for a good overall system performance, taking into account the interdependence of both systems. We use the NSGA-II for storage assignment and ACO for order picking and adapt them to the specific requirements of horizontal systems-of-systems. In our evaluation, we compare our approaches to state-of-the-art approaches in mezzanine warehouses and show that our proposed approaches increase the system performance.
Our proposed approaches provide important contributions to both academic research and practical applications. To the best of our knowledge, we are the first to design a self-aware optimization framework for adaptation planning strategies that integrates situation-awareness, algorithm selection, parameter tuning, as well as learning and reasoning. Our evaluation of platooning coordination shows promising results for the application of the framework. Moreover, our proposed strategies to compensate for negative effects of platooning represent an important milestone, which could lead to higher acceptance of this technology in society and support its future adoption in the real world. The proposed methodology and utility functions that address uncertainty are an important step to improving the capabilities of SAS in an increasingly turbulent environment. Similarly, our contributions to systems-of-systems optimization are major contributions to the state of logistics and systems-of-systems research.
Finally, we select real-world use cases for the application of our approaches and cooperate with industrial partners, which highlights the practical relevance of our contributions. The reduction of manual effort and required expert knowledge in our self-aware optimization framework is a milestone in bridging the gap between academia and practice. One of our partners integrated the two-stage approach to tackling the rVRP into its software system, improving both time to solution and solution quality. In conclusion, the contributions of this thesis have spawned several research projects, such as a long-term industrial project on optimizing tours and routes in parcel delivery funded by the Bayerisches Verbundforschungsprogramm (BayVFP) – Digitalisierung, and further collaborations, opening up many promising avenues for future research.
One consequence of the recent coronavirus pandemic is increased demand and use of online services around the globe. At the same time, performance requirements for modern technologies are becoming more stringent as users become accustomed to higher standards. These increased performance and availability requirements, coupled with the unpredictable usage growth, are driving an increasing proportion of applications to run on public cloud platforms as they promise better scalability and reliability.
With data centers already responsible for about one percent of the world's power consumption, optimizing resource usage is of paramount importance. Simultaneously, meeting the increasing and changing resource and performance requirements is only possible by optimizing resource management without introducing additional overhead. This requires the research and development of new modeling approaches to understand the behavior of running applications with minimal information.
However, the emergence of modern software paradigms makes it increasingly difficult to derive such models and renders previous performance modeling techniques infeasible. Modern cloud applications are often deployed as a collection of fine-grained and interconnected components called microservices. Microservice architectures offer massive benefits but also have broad implications for the performance characteristics of the respective systems. In addition, the microservices paradigm is typically paired with a DevOps culture, resulting in frequent application and deployment changes. Such applications are often referred to as cloud-native applications. In summary, the increasing use of ever-changing cloud-hosted microservice applications introduces a number of unique challenges for modeling the performance of modern applications. These include the amount, type, and structure of monitoring data, frequent behavioral changes, or infrastructure variabilities. This violates common assumptions of the state of the art and opens a research gap for our work.
In this thesis, we present five techniques for automated learning of performance models for cloud-native software systems. We achieve this by combining machine learning with traditional performance modeling techniques. Unlike previous work, our focus is on cloud-hosted and continuously evolving microservice architectures, so-called cloud-native applications. Therefore, our contributions aim to solve the above challenges to deliver automated performance models with minimal computational overhead and no manual intervention. Depending on the cloud computing model, privacy agreements, or monitoring capabilities of each platform, we identify different scenarios where performance modeling, prediction, and optimization techniques can provide great benefits. Specifically, the contributions of this thesis are as follows:
Monitorless: Application-agnostic prediction of performance degradations.
To manage application performance with only platform-level monitoring, we propose Monitorless, the first truly application-independent approach to detecting performance degradation. We use machine learning to bridge the gap between platform-level monitoring and application-specific measurements, eliminating the need for application-level monitoring. Monitorless creates a single and holistic resource saturation model that can be used for heterogeneous and untrained applications. Results show that Monitorless infers resource-based performance degradation with 97% accuracy. Moreover, it can achieve similar performance to typical autoscaling solutions, despite using less monitoring information.
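The core idea of training a single saturation model on platform-level metrics alone can be sketched as follows; the feature set, synthetic labels, and model choice are assumptions for illustration, not Monitorless itself.

```python
# Illustrative sketch of the Monitorless idea: learn a resource saturation
# model from platform-level metrics only (no application instrumentation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Platform-level samples: [cpu_util, mem_util, disk_iops, net_mbps]
X = rng.uniform(0, 1, size=(1000, 4))
# Synthetic ground truth: degradation when CPU and I/O saturate together
y = ((X[:, 0] > 0.85) & (X[:, 2] > 0.7)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
sample = np.array([[0.92, 0.40, 0.80, 0.30]])
print("degradation predicted:", bool(model.predict(sample)[0]))
```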
SuanMing: Predicting performance degradation using tracing.
We introduce SuanMing to mitigate performance issues before they impact the user experience. This contribution applies to scenarios where tracing tools enable application-level monitoring. SuanMing predicts explainable causes of expected performance degradations, so that they can be prevented before they occur. Evaluation results show that SuanMing can predict and pinpoint future performance degradations with an accuracy of over 90%.
SARDE: Continuous and autonomous estimation of resource demands.
We present SARDE to learn application models for highly variable application deployments. This contribution focuses on the continuous estimation of application resource demands, a key parameter of performance models. SARDE represents an autonomous ensemble estimation technique. It dynamically and continuously optimizes, selects, and executes an ensemble of approaches to estimate resource demands in response to changes in the application or its environment. Through continuous online adaptation, SARDE efficiently achieves an average resource demand estimation error of 15.96% in our evaluation.
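As a simplified illustration of resource demand estimation and ensemble selection in the spirit of SARDE (which additionally tunes and adapts continuously online), consider this sketch; the second estimator and all numbers are invented.

```python
# Sketch of resource demand estimation with ensemble selection. The service
# demand law D = U / X (utilization over throughput) is one classic estimator.

def demand_service_law(obs):
    """Service demand law: D = U / X."""
    return obs["util"] / obs["tput"]

def demand_rt_based(obs):
    """Hypothetical response-time-based estimator, for illustration only."""
    return obs["rt"] / (1 + obs["queue"])

def select_estimator(estimators, window):
    """Pick the estimator with the lowest error on a recent validation
    window of (observation, measured_demand) pairs -- a static stand-in
    for SARDE's continuous online selection and tuning."""
    def err(est):
        return sum(abs(est(o) - d) for o, d in window)
    return min(estimators, key=err)

window = [({"util": 0.6, "tput": 30.0, "rt": 0.05, "queue": 2}, 0.02),
          ({"util": 0.8, "tput": 38.0, "rt": 0.07, "queue": 4}, 0.021)]
best = select_estimator([demand_service_law, demand_rt_based], window)
print(best.__name__, best(window[-1][0]))
```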
DepIC: Learning parametric dependencies from monitoring data.
DepIC utilizes feature selection techniques in combination with an ensemble regression approach to automatically identify and characterize parametric dependencies. Parametric dependencies can massively improve the accuracy of performance models, yet DepIC is the first approach to learn such dependencies automatically from passive monitoring data streams. Our evaluation shows that DepIC achieves 91.7% precision in identifying dependencies and reduces the characterization prediction error by 30% compared to the best individual approach.
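A minimal sketch of the two-step idea, first identifying a dependency via feature selection and then characterizing it via regression, could look like this; the data, selector, and regressor are illustrative assumptions, not DepIC's actual ensemble.

```python
# Sketch of the DepIC idea: from passively monitored data, (1) identify
# which request parameter a measurement depends on, then (2) characterize
# the dependency with a regression model.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
params = np.column_stack([
    rng.integers(1, 100, n),   # request size  <- true driver
    rng.integers(0, 2, n),     # cache flag    (irrelevant here)
    rng.uniform(0, 1, n),      # noise parameter
])
service_time = 0.4 * params[:, 0] + rng.normal(0, 1, n)

# Step 1: identify the dependency
scores = mutual_info_regression(params, service_time, random_state=0)
driver = int(np.argmax(scores))
print("identified driver parameter:", driver)

# Step 2: characterize it
model = LinearRegression().fit(params[:, [driver]], service_time)
print("characterization: time ~ %.2f * p%d" % (model.coef_[0], driver))
```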
Baloo: Modeling the configuration space of databases.
To study the impact of different configurations within distributed DBMSs, we introduce Baloo. Our last contribution models the configuration space of databases considering measurement variabilities in the cloud. More specifically, Baloo dynamically estimates the required benchmarking measurements and automatically builds a configuration space model of a given DBMS. Our evaluation of Baloo on a dataset consisting of 900 configuration points shows that the framework achieves a prediction error of less than 11% while saving up to 80% of the measurement effort.
Although the contributions themselves are orthogonal, taken together they provide a holistic approach to the performance management of modern cloud-native microservice applications.
Our contributions are a significant step forward as they specifically target novel and cloud-native software development and operation paradigms, surpassing the capabilities and limitations of previous approaches.
In addition, the research presented in this thesis has a significant impact on industry, as the contributions were developed in collaboration with research teams from Nokia Bell Labs, Huawei, and Google.
Overall, our solutions open up new possibilities for managing and optimizing cloud applications and improve cost and energy efficiency.
Spaceflight is one of the most conservative industries. New developments of components and systems are based on existing standards and the developers' own experience. The systems must be designed within a given tight timeframe, manufactured in very small quantities, and finally undergo elaborate qualification. Experience shows that the time for development iterations and extensive refinement of a system is often insufficient. Finished sensors, subsystems, and systems are one-of-a-kind devices, designed for one specific function and, in some cases, even for specific missions only. Redeveloping such components is extremely expensive and risky. Flight-proven systems are therefore used for many years without modification or optimization, ignoring technological progress.
Because of the enormous financial effort and this inertia, the conventional development approach cannot be transferred directly to small satellites. Dynamic development in the low-cost domain requires a universal strategy that can easily be adapted to different fields of application. This strategy must not only be flexible but also lead to a hardware solution that is as optimal and efficient as possible.
This work presents a software tool for the time- and cost-efficient development of star sensors for small satellites. To achieve maximum performance of the complete system, the sensor must meet the requirements and constraints of given applications and, beyond that, be optimized for these applications. Because of the complex interdependencies between the parameters of optical sensor systems, no straightforward solution of the problem is possible. Only by using computer-based optimization methods can a best-possible system concept for the given constraints be worked out quickly and efficiently.
Human-computer interfaces have the potential to support mental health practitioners in alleviating mental distress.
Adoption of this technology in practice is, however, slow.
We provide means to extend the design space of human-computer interfaces for mitigating mental distress.
To this end, we suggest three complementary approaches: using presentation technology, using virtual environments, and using communication technology to facilitate social interaction.
We provide new evidence that elementary aspects of presentation technology affect the emotional processing of virtual stimuli, that perception of our environment affects the way we assess our environment, and that communication technologies affect social bonding between users.
By showing how interfaces modify emotional reactions and facilitate social interaction, we provide converging evidence that human-computer interfaces can help alleviate mental distress.
These findings may advance the goal of adapting technological means to the requirements of mental health practitioners.
This thesis deals with the first part of a larger project that pursues the ultimate goal of implementing a software tool that creates a Mission Control Room in Virtual Reality. The software is to be used for the operation of spacecraft and is specially developed for the unique real-time requirements of unmanned satellite missions. From launch, throughout the whole mission, up to the recovery or disposal of the satellite, all systems need to be monitored and controlled at continuous intervals to ensure the mission's success. Mission operation is an essential part of every space mission and has been undertaken for decades. Recent advancements in the realm of immersive technologies pave the way for innovative methods of operating spacecraft. Virtual Reality has the capability to resolve the physical constraints set by traditional Mission Control Rooms and thereby offers novel opportunities. The thesis outlines the underlying theoretical aspects of Virtual Reality, Mission Control, and IP communication. The focus, however, lies on the practical part, which covers the first steps of the implementation of the virtual Mission Control Room in the Unity Game Engine. Overall, this work demonstrates Virtual Reality technology and shows its possibilities with respect to the operation of spacecraft.
The importance of proactive and timely prediction of critical events is steadily increasing, whether in the manufacturing industry or in private life. In the past, machines in the manufacturing industry were often maintained based on a regular schedule or threshold violations, which is no longer competitive as it causes unnecessary costs and downtime. In contrast, the predictions of critical events in everyday life are often much more concealed and hardly noticeable to the private individual, unless the critical event occurs. For instance, our electricity provider has to ensure that we, as end users, are always supplied with sufficient electricity, or our favorite streaming service has to guarantee that we can watch our favorite series without interruptions. For this purpose, they have to constantly analyze what the current situation is, how it will develop in the near future, and how they have to react in order to cope with future conditions without causing power outages or video stalling.
In order to analyze the performance of a system, monitoring mechanisms are often integrated to observe characteristics that describe the workload and the state of the system and its environment. Reactive systems typically employ thresholds, utility functions, or models to determine the current state of the system. However, such reactive systems cannot anticipate future events; they can only respond once the events occur. In the case of critical events, reactively determining the current system state is futile, whereas a proactive system could have predicted the event in advance and enabled timely countermeasures. To achieve proactivity, the system requires estimates of future system states. Given the gap between design time and runtime, it is typically not possible to use expert knowledge to model a priori all situations a system might encounter at runtime. Therefore, prediction methods must be integrated into the system. Depending on the available monitoring data and the complexity of the prediction task, either time series forecasting in combination with thresholding or more sophisticated machine and deep learning models have to be trained.
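The difference between reactive and proactive operation can be made concrete with a minimal sketch; the naive drift forecast below stands in for a real forecasting model, and all thresholds and loads are invented.

```python
# Minimal contrast between a reactive and a proactive check: the reactive
# system only flags a violation once it has occurred, the proactive one
# forecasts future load and flags it in advance.

def reactive_check(current_load, threshold=0.9):
    return current_load > threshold

def proactive_check(history, horizon=3, threshold=0.9):
    # Naive drift forecast as a stand-in for a real forecasting model:
    # extrapolate the average step of the recent history.
    step = (history[-1] - history[0]) / (len(history) - 1)
    forecast = history[-1] + horizon * step
    return forecast > threshold

load_history = [0.60, 0.66, 0.73, 0.79]
print("reactive alarm: ", reactive_check(load_history[-1]))   # False
print("proactive alarm:", proactive_check(load_history))      # True
```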
Although numerous forecasting methods have been proposed in the literature, these methods have their advantages and disadvantages depending on the characteristics of the time series under consideration. Therefore, expert knowledge is required to decide which forecasting method to choose. However, since the time series observed at runtime cannot be known at design time, such expert knowledge cannot be implemented in the system. In addition to selecting an appropriate forecasting method, several time series preprocessing steps are required to achieve satisfactory forecasting accuracy. In the literature, this preprocessing is often done manually, which is not practical for autonomous computing systems, such as Self-Aware Computing Systems. Several approaches have also been presented in the literature for predicting critical events based on multivariate monitoring data using machine and deep learning. However, these approaches are typically highly domain-specific, such as financial failures, bearing failures, or product failures. Therefore, they require in-depth expert knowledge. For this reason, these approaches cannot be fully automated and are not transferable to other use cases. Thus, the literature lacks generalizable end-to-end workflows for modeling, detecting, and predicting failures that require only little expert knowledge.
To overcome these shortcomings, this thesis presents a system model for meta-self-aware prediction of critical events based on the LRA-M loop of Self-Aware Computing Systems. Building upon this system model, this thesis provides six further contributions to critical event prediction. While the first two contributions address critical event prediction based on univariate data via time series forecasting, the three subsequent contributions address critical event prediction for multivariate monitoring data using machine and deep learning algorithms. Finally, the last contribution addresses the update procedure of the system model. Specifically, the seven main contributions of this thesis can be summarized as follows:
First, we present a system model for meta self-aware prediction of critical events. To handle both univariate and multivariate monitoring data, it offers univariate time series forecasting for use cases where a single observed variable is representative of the state of the system, and machine learning algorithms combined with various preprocessing techniques for use cases where a large number of variables are observed to characterize the system’s state. However, the two different modeling alternatives are not disjoint, as univariate time series forecasts can also be included to estimate future monitoring data as additional input to the machine learning models. Finally, a feedback loop is incorporated to monitor the achieved prediction quality and trigger model updates.
We propose a novel hybrid time series forecasting method for univariate, seasonal time series, called Telescope. Telescope automatically preprocesses the time series, applies a divide-and-conquer technique to split it into multiple components, and derives additional categorical information. It then forecasts the components and the categorical information separately, using a specific state-of-the-art method for each component. Finally, Telescope recombines the individual forecasts. As it performs both preprocessing and forecasting automatically, Telescope represents a complete end-to-end approach to univariate seasonal time series forecasting. Experimental results show that Telescope achieves higher forecast accuracy, more reliable forecasts, and a substantial speedup. Furthermore, we apply Telescope to the scenario of predicting critical events for virtual machine auto-scaling. Here, results show that Telescope considerably reduces the average response time and significantly reduces the number of service level objective violations.
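A structural sketch of this divide-and-conquer idea, using a plain linear trend and seasonal averages instead of Telescope's automatic preprocessing and state-of-the-art per-component forecasters, might look as follows.

```python
# Simplified illustration of decomposition-based forecasting: split a
# seasonal series into trend and seasonal components, forecast each with a
# method suited to it, and recombine. This is only a structural sketch,
# not Telescope itself.
import numpy as np

def decompose(y, period):
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)         # linear trend
    trend = slope * t + intercept
    detrended = y - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    return slope, intercept, seasonal

def forecast(y, period, h):
    slope, intercept, seasonal = decompose(np.asarray(y, float), period)
    t_future = np.arange(len(y), len(y) + h)
    trend_fc = slope * t_future + intercept        # extrapolate trend
    season_fc = seasonal[t_future % period]        # repeat seasonal pattern
    return trend_fc + season_fc                    # recombine

y = [10, 14, 9, 12, 11, 15, 10, 13, 12, 16, 11, 14]  # period-4 pattern + drift
print(forecast(y, period=4, h=4))
```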
For the automatic selection of a suitable forecasting method, we introduce two frameworks for recommending forecasting methods. The first framework extracts various time series characteristics to learn the relationship between them and forecast accuracy. The second framework divides the historical observations into internal training and validation parts to estimate the most appropriate forecasting method; it also includes time series preprocessing steps. Comparisons of the proposed recommendation frameworks with individual state-of-the-art forecasting methods and with the state-of-the-art recommendation approach show that the proposed frameworks considerably improve forecast accuracy.
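The core of the second framework, estimating the best method on an internal train/validation split, can be sketched like this; the two candidate methods are deliberately trivial stand-ins for real forecasters.

```python
# Sketch of forecasting method recommendation via an internal split: score
# each candidate on a held-back validation part, recommend the best.
import numpy as np

def naive_last(train, h):          # repeat the last observation
    return np.repeat(train[-1], h)

def drift(train, h):               # extrapolate the average step
    step = (train[-1] - train[0]) / (len(train) - 1)
    return train[-1] + step * np.arange(1, h + 1)

def recommend(history, methods, val_frac=0.3):
    history = np.asarray(history, float)
    split = int(len(history) * (1 - val_frac))
    train, val = history[:split], history[split:]
    def mae(m):
        return np.mean(np.abs(m(train, len(val)) - val))
    return min(methods, key=mae)

history = [1, 2, 3, 4, 5, 6, 7, 8]
print(recommend(history, [naive_last, drift]).__name__)  # -> drift
```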
With regard to multivariate monitoring data, we first present an end-to-end workflow to detect critical events in technical systems in the form of anomalous machine states. The end-to-end design includes raw data processing, phase segmentation, data resampling, feature extraction, and machine tool anomaly detection. In addition, the workflow does not rely on profound domain knowledge or specific monitoring variables, but merely assumes standard machine monitoring data. We evaluate the end-to-end workflow using data from a real CNC machine. The results indicate that conventional frequency analysis does not detect the critical machine conditions well, while our workflow detects the critical events very well with an F1-score of almost 91%.
To predict critical events rather than merely detecting them, we compare different modeling alternatives for critical event prediction in the use case of time-to-failure prediction of hard disk drives. Given that failure records are typically significantly less frequent than instances representing the normal state, we employ different oversampling strategies. Next, we compare the prediction quality of binary class modeling with downscaled multi-class modeling. Furthermore, we integrate univariate time series forecasting into the feature generation process to estimate future monitoring data. Finally, we model the time-to-failure using not only classification models but also regression models. The results suggest that multi-class modeling provides the overall best prediction quality with respect to practical requirements. In addition, we prove that forecasting the features of the prediction model significantly improves the critical event prediction quality.
We propose an end-to-end workflow for predicting critical events of industrial machines. Again, this approach does not rely on expert knowledge except for the definition of monitoring data, and therefore represents a generalizable workflow for predicting critical events of industrial machines. The workflow includes feature extraction, feature handling, target class mapping, and model learning with integrated hyperparameter tuning via a grid-search technique. Drawing on the result of the previous contribution, the workflow models the time-to-failure prediction in terms of multiple classes, where we compare different labeling strategies for multi-class classification. The evaluation using real-world production data of an industrial press demonstrates that the workflow is capable of predicting six different time-to-failure windows with a macro F1-score of 90%. When scaling the time-to-failure classes down to a binary prediction of critical events, the F1-score increases to above 98%.
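One possible labeling strategy of the kind compared here, mapping remaining time-to-failure to six classes, can be sketched as follows; the window boundaries are illustrative assumptions, not the thesis's actual windows.

```python
# Sketch of mapping remaining time-to-failure (in hours) to multiple
# classes for multi-class time-to-failure prediction.
import bisect

BOUNDARIES = [2, 8, 24, 72, 168]   # hours; defines 6 time-to-failure windows

def ttf_to_class(hours_to_failure):
    """Class 0 = failure within 2 h, ..., class 5 = more than a week away."""
    return bisect.bisect_left(BOUNDARIES, hours_to_failure)

for ttf in [1, 5, 30, 200]:
    print(ttf, "h ->", ttf_to_class(ttf))
```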
Finally, we present four update triggers to assess when critical event prediction models should be re-trained during on-line application. Such re-training is required, for instance, due to concept drift. The update triggers introduced in this thesis take into account the elapsed time since the last update, the prediction quality achieved on the current test data, and the prediction quality achieved on the preceding test data. We compare the different update strategies with each other and with the static baseline model. The results demonstrate the necessity of model updates during on-line application and suggest that the update triggers that consider both the prediction quality of the current and preceding test data achieve the best trade-off between prediction quality and number of updates required.
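A minimal sketch of such update triggers, combining a time-based and a quality-drop criterion, is shown below; the thresholds are invented for illustration.

```python
# Sketch of update triggers for online model maintenance: retrain when too
# much time has passed, or when the prediction quality on the current test
# data drops relative to the quality on the preceding test data.

def should_update(days_since_update, f1_current, f1_previous,
                  max_age_days=30, max_quality_drop=0.05):
    if days_since_update > max_age_days:
        return True                      # time-based trigger
    if f1_previous - f1_current > max_quality_drop:
        return True                      # quality-drop trigger
    return False

print(should_update(10, f1_current=0.84, f1_previous=0.91))  # True (drop)
print(should_update(10, f1_current=0.90, f1_previous=0.91))  # False
```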
We are convinced that the contributions of this thesis constitute significant impulses for the academic research community as well as for practitioners. First of all, to the best of our knowledge, we are the first to propose a fully automated, end-to-end, hybrid, component-based forecasting method for seasonal time series that also includes time series preprocessing. Due to the combination of reliably high forecast accuracy and reliably low time-to-result, it offers many new opportunities in applications requiring accurate forecasts within a fixed time period in order to take timely countermeasures. In addition, the promising results of the forecasting method recommendation systems provide new opportunities to enhance forecasting performance for all types of time series, not just seasonal ones. Furthermore, we are the first to expose the deficiencies of the prior state-of-the-art forecasting method recommendation system.
Concerning the contributions to critical event prediction based on multivariate monitoring data, we have already collaborated closely with industrial partners, which supports the practical relevance of the contributions of this thesis. The automated end-to-end design of the proposed workflows that do not demand profound domain or expert knowledge represents a milestone in bridging the gap between academic theory and industrial application. Finally, the workflow for predicting critical events in industrial machines is currently being operationalized in a real production system, underscoring the practical impact of this thesis.
A graph is an abstract network that represents a set of objects, called vertices, and relations between these objects, called edges. Graphs can model various networks; for example, in a social network the vertices correspond to users and the edges represent relations between the users. To understand the structure of a graph, it is helpful to visualize it. A standard visualization is a node-link diagram in the Euclidean plane: the vertices are drawn as points in the plane, and each edge is drawn as a Jordan curve between the points representing its endpoints. Edge crossings decrease the readability of a drawing; therefore, Crossing Optimization is a fundamental problem in Computer Science. This book explores the research frontiers and introduces novel approaches in Crossing Optimization.
Since the first CubeSat launch in 2003, the hardware and software complexity of nanosatellites has been continuously increasing.
To keep up with the continuously increasing mission complexity and to retain the primary advantages of a CubeSat mission, a new approach for the overall space and ground software architecture and protocol configuration is elaborated in this work.
The aim of this thesis is to propose a uniform software and protocol architecture as a basis for software development, test, simulation and operation of multiple pico-/nanosatellites based on ultra-low power components.
In contrast to single-CubeSat missions, current and upcoming nanosatellite formation missions require faster and more straightforward development, pre-flight testing and calibration procedures as well as simultaneous operation of multiple satellites.
A dynamic and decentralized Compass mission network, consisting of uniformly accessible nodes, was established across multiple active CubeSat missions.
The Compass middleware was developed to unify the communication and functional interfaces between all involved mission-related software and hardware components.
All systems can access each other via dynamic routes to perform service-based M2M communication.
With the proposed model-based communication approach, all states, abilities and functionalities of a system are accessed in a uniform way.
The Tiny scripting language was designed to allow dynamic code execution on ultra-low power components, serving as the basis for constraint-based in-orbit scheduling and experiment execution.
The implemented Compass Operations front-end enables far-reaching monitoring and control capabilities of all ground and space systems.
Its integrated constraint-based operations task scheduler allows the recording of complex satellite operations, which are conducted automatically during the overpasses.
The outcome of this thesis became an enabling technology for the UWE-3, UWE-4, and NetSat CubeSat missions.
Frequently, port scans are early indicators of more serious attacks. Unfortunately, the detection of slow port scans in company networks is challenging due to the massive amount of network data. This paper proposes an innovative approach for preprocessing flow-based data which is specifically tailored to the detection of slow port scans. The preprocessing chain generates new objects based on flow-based data aggregated over time windows, taking into account domain knowledge as well as additional knowledge about the network structure. The computed objects are used as input for the further analysis. Based on these objects, we propose two different approaches for the detection of slow port scans: one unsupervised approach using sequential hypothesis testing, and one supervised approach using classification algorithms. We compare both approaches with existing port scan detection algorithms on the flow-based CIDDS-001 data set. Experiments indicate that the proposed approaches achieve better detection rates and fewer false alarms than similar algorithms.
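The unsupervised branch can be illustrated with a minimal sequential probability ratio test over a host's connection attempts, in the spirit of threshold-random-walk scan detection; the probabilities and decision thresholds below are assumptions, not the paper's parameters.

```python
# Sketch of sequential hypothesis testing for scan detection: accumulate a
# log-likelihood ratio over connection outcomes until a decision threshold
# is crossed.
import math

P_FAIL_BENIGN, P_FAIL_SCAN = 0.2, 0.8        # assumed failure probabilities
UPPER, LOWER = math.log(99), math.log(1 / 99)  # decision thresholds

def classify(outcomes):
    """outcomes: iterable of booleans, True = failed connection attempt."""
    llr = 0.0
    for failed in outcomes:
        p1 = P_FAIL_SCAN if failed else 1 - P_FAIL_SCAN
        p0 = P_FAIL_BENIGN if failed else 1 - P_FAIL_BENIGN
        llr += math.log(p1 / p0)
        if llr >= UPPER:
            return "scanner"
        if llr <= LOWER:
            return "benign"
    return "undecided"

print(classify([True] * 5))    # -> scanner
print(classify([False] * 5))   # -> benign
```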
Remote sensing time series are collections of remote sensing data acquired at fixed, equally spaced time intervals over a particular area or for the whole world. Near-daily high spatial resolution data is much needed for remote sensing applications such as agriculture monitoring, phenology change detection, and environmental monitoring. Remote sensing applications can produce better and more accurate results if they are provided with dense and accurate time series of data. The current remote sensing satellite architecture is still not capable of providing near-daily or daily high spatial resolution images to fulfill the needs of the above-mentioned applications. Limitations in sensors, high development and operational costs of satellites, and the presence of clouds blocking the area of observation are some of the reasons that make near-daily or daily high spatial resolution optical remote sensing data highly challenging to achieve. With developments in optical sensor systems and well-planned remote sensing satellite constellations, this situation can be improved, but it comes at a cost. Even then, the issue will not be completely resolved, and the growing need for high temporal and high spatial resolution data cannot be fulfilled entirely. Because the data collection process relies on satellites, which are physical systems, these can fail unpredictably for various reasons and cause a complete loss of observation for a given period of time, leaving a gap in the time series. Moreover, to observe long-term trends in phenology change due to rapidly changing environmental conditions, remote sensing data from the present is not sufficient; data from the past is also important.
A better alternative solution for this issue is the generation of remote sensing time series by fusing data from multiple remote sensing satellites with different spatial and temporal resolutions. This approach is effective and efficient: a high spatial, low temporal resolution image from a satellite such as Sentinel-2 can be fused with a low spatial, high temporal resolution image from a satellite such as Sentinel-3 to generate synthetic high temporal, high spatial resolution data. Remote sensing time series generation by data fusion can be applied to satellite images captured currently as well as to images captured by satellites in the past. This provides the much-needed high temporal and high spatial resolution images for remote sensing applications. The approach, with its simple nature, is cost-effective and gives researchers the means to generate the data needed for their applications on their own from the limited sources of data available to them. An efficient data fusion approach in combination with a well-planned satellite constellation can offer a solution that ensures a near-daily time series of remote sensing data without any gaps. The aim of this research work is to develop efficient data fusion approaches to achieve dense remote sensing time series.
Corfu is a framework for satellite software, covering not only the onboard part but also the ground. Developing software with Corfu follows an iterative model-driven approach. The basis of the process is an engineering model: engineers formally describe the basic structure of the onboard software in configuration files, which constitute the engineering model. In a first step, Corfu verifies the model at different levels, not only syntactically and semantically but also on higher levels such as scheduling.
Based on the model, Corfu generates a software scaffold that follows an application-centric approach. Onboard software images consist of a list of applications connected through communication channels called topics. Corfu's generic and generated code covers this fundamental communication as well as telecommand and telemetry handling. All users have to do is inherit from a generated class and implement the behavior in overridden methods. For each application, the generator creates an abstract class with pure virtual methods. These methods are callback functions, e.g., for handling telecommands or executing code in threads.
However, the model cannot anticipate how users will implement the software. Therefore, as an innovation compared to other frameworks, Corfu introduces feedback from the user code back to the model. In this way, we extend the engineering model with information about functions/methods, their invocations, their stack usage, and information about events and telemetry emission; further information extraction could be added for additional use cases. We extract the information in two ways: assembly analysis and source code analysis. The assembly analysis collects information about the stack usage of functions and methods.
On the one hand, Corfu uses the gathered information to accomplish additional verification steps, e.g., checking whether stack usage exceeds the stack sizes of threads. On the other hand, we use the gathered information to improve the performance of the onboard software. In a use case, we show how the size of the compiled binary and the bandwidth towards the ground can be reduced by exploiting source code information at run-time.
Production systems with industrial robots are becoming increasingly complex; while their work areas used to be static and shielded and the programmed sequences fixed, the demands on modern robotic production systems have grown: with the help of intelligent sensor technology, they are now expected to be usable in unstructured environments as well, to be easy to reconfigure as lot sizes shrink due to individualized products and production tasks change frequently, and even to enable direct collaboration between humans and robots. Particularly for this human-robot collaboration, it becomes necessary for humans to be able to understand the robot's data and actions easily. Given these increased demands, the operator interfaces of such systems must be improved as well. Augmented Reality (AR) lends itself as a foundation for these new user interfaces: a technology with which complex spatial data can be presented to the operator in an easily understandable way. Complex information is visualized in the users' working environment as virtual overlays and thus becomes understandable at a glance. The various existing AR display techniques are suited to different fields of application to different degrees and should therefore be flexibly combinable and deployable. These AR systems should also be quick and easy to commission on diverse hardware in different working environments.
This work presents a framework for Augmented Reality systems that fulfills these requirements without requiring specialized AR hardware. To this end, the Flexible AR Framework combines and bundles various software functions for the fundamental AR display computations, for calibrating the necessary hardware, algorithms for capturing the environment by means of structured light, and generic AR visualizations, thereby allowing different AR display systems to be commissioned quickly and flexibly and operated in parallel. The first part of the work presents standard hardware for various forms of AR visualization and the algorithms necessary to combine them flexibly into an AR system. The individual devices used must be calibrated precisely; various options for this are presented, and the typical display accuracies achievable with them are characterized in an evaluation. After the presentation of the fundamental AR systems of the Flexible AR Framework, a series of applications is presented in which the developed system was used as an AR user interface in concrete practical realizations, among others for monitoring, collaborating with, and easily programming industrial robots, but also for visualizing complex sensor data or for remote maintenance. In the course of the work, the advantages of using AR technology in complex production systems are thus worked out and substantiated in user studies.
Combining Distributed Consensus with Robust H-infinity-Control for Satellite Formation Flying
(2019)
Control methods that guarantee stability in the presence of uncertainties are mandatory in space applications. Furthermore, distributed control approaches are beneficial in terms of scalability and for achieving common goals, especially in multi-agent setups like formation control. This paper presents a combination of robust H-infinity control and distributed control using the consensus approach, deriving a distributed consensus-based generalized plant description that can be used in H-infinity synthesis. Special focus is placed on space applications, namely satellite formation flying. The presented results show the applicability of the developed distributed robust control method to a simple yet realistic space scenario, namely a spaceborne distributed telescope. With this approach, an arbitrary number of satellites/agents can be controlled towards an arbitrary formation geometry. Through the combination with robust H-infinity control, the presented method satisfies the high stability and robustness demands found, e.g., in space applications.
Managing projects that involve one-of-a-kind, interdisciplinary tasks as well as individual conditions and constraints is a demanding undertaking. Several standardized process models exist that offer an organizational framework of phases, processes, roles, and methods to apply.
Traditional process models are typically followed when the results to be achieved and the course of a project can be planned on the basis of the available information.
Agile process models are primarily used when the available information is insufficient for complete up-front planning. Their focus is on responding flexibly to changing requirements. In direct exchange with customers, intermediate results are evaluated in what are usually several consecutive cycles, and the next development steps are planned and implemented on that basis.
Hybrid process models are used when methods from several different process models are required to handle a project.
The importance of hybrid process models has grown steadily over the years. Their particular benefit is that the selection of methods can be adapted to a project's individual context. In practice, however, there is a very large number of methods, so selecting those that fit the context and combining them into an individual process model is a challenge even for experts. The research results of this work show that no scheme to support this process existed so far.
To close this research gap, an adaptive reference model for hybrid project management (ARHP) was developed. The scientific contribution consists, on the one hand, in developing a procedure for selecting and combining methods that fit the context and, on the other hand, in implementing that procedure as a semi-automatic tool. Reference model users can capture their individual project context in it by selecting applicable criteria (so-called parameter values). The ARHP then offers them a process model composed of methods that can be applied and linked together.
Since quick decisions for a suitable process model are frequently required in the project management community and even experts do not know all methods, the benefit of the "digital consulting" offered by the semi-automatic ARHP is rated as high.
Both the parameters required to capture the context and the methods with the highest practical relevance were investigated by means of an extensive survey. Its scientific contribution includes, among other things, the first-ever collection of reasons for using methods within individual hybrid process models. Moreover, the collected data allows a direct comparison of method usage in working and non-working hybrid process models.
On this data basis, an algorithm forming the adaptation mechanism of the ARHP is developed in three design science research cycles. The ARHP is evaluated using the developed semi-automatic prototype with the involvement of project management experts.
The remarks on maintaining the ARHP can be understood as a guide for reference model designers. They form the last part of the work and show how the ARHP can be continuously developed further. Finally, an outlook is given on topics by which the ARHP could be extended in future research, for example even stronger automation and recommendations for change management, both of which are already in preparation.
This textbook provides an introduction to common methods of performance modeling and analysis of communication systems. These methods form the basis of traffic engineering, teletraffic theory, and analytical system dimensioning. The fundamentals of probability theory, stochastic processes, Markov processes, and embedded Markov chains are presented. Basic queueing models are described with applications in communication networks. Advanced methods are presented that have been frequently used in recent practice, especially discrete-time analysis algorithms, or which go beyond classical performance measures such as Quality of Experience or energy efficiency. Recent examples of modern communication networks include Software Defined Networking and the Internet of Things. Throughout the book, illustrative examples are used to provide practical experience in performance modeling and analysis.
Target group: The book is aimed at students and scientists in computer science and computer engineering, operations research, electrical engineering, and economics.
Educational robotics is an innovative approach to teaching and learning a variety of concepts and skills, as well as motivating students, in the field of Science, Technology, Engineering, and Mathematics (STEM) education. This especially applies to educational robotics competitions such as the FIRST LEGO League, the RoboCup Junior, or the World Robot Olympiad as an out-of-school, goal-oriented approach to educational robotics. These competitions have gained greatly in popularity in recent years, and thousands of students participate in them worldwide each year. Moreover, the corresponding technology has become more accessible for teachers and students to use in their classrooms and arguably has high potential to impact the nature of science education at all levels. One skill said to benefit from educational robotics is problem solving. This thesis understands problem solving skills as engineering design skills (in contrast to scientific inquiry). Problem solving skills are among the 21st century skills demanded by industry leaders and policy makers as relevant for students to be well prepared for their future working lives in a world shaped by ongoing automation, globalization, and digitalization. The overall aim of this thesis is to answer the question whether educational robotics competitions such as the World Robot Olympiad (WRO) have a positive impact on students' learning in terms of their problem solving skills (as part of 21st century skills). In detail, this thesis focuses on a) whether students can improve their problem solving skills through participation in educational robotics competitions, b) how this skill development is accomplished, and c) how teachers support their students during the learning process in the competition. The corresponding empirical studies were conducted throughout the 2018 and 2019 seasons of the WRO in Germany. The results show overall positive effects of participation in the WRO on students' learning of problem solving skills. They display an increase in students' problem solving skills which is not moderated by other variables such as the competition's category or age group, the students' gender or experience, or the success of the teams at the competition. Moreover, the results indicate that students develop their problem solving skills by using a systematic engineering design process and sophisticated problem solving strategies. Lastly, the results of this thesis underline the teacher's role in educational robotics competitions as manager and guide (in terms of the constructionist learning theory) of the students' learning process, especially on the affective level. All in all, this thesis contributes to closing the research gap concerning the lack of systematic evaluation of educational robotics by providing more (methodologically) sophisticated research on this topic. Thereby, this thesis follows the call of the educational robotics community for more rigorous (quantitative) research, which is necessary to validate the impact of educational robotics.
In recent years, cloud gaming has become a popular research topic and has claimed many benefits over conventional gaming in the commercial domain. While cloud gaming platforms have frequently failed in the past, they have received new impetus over the last years that has brought them to the edge of a commercial breakthrough. The fragility of the cloud gaming market may be caused by the high investment costs, the offered pricing models, or competition from existing "à la carte" platforms. This paper investigates the costs and benefits of both platform types through a twofold approach. We first take the perspective of the customers and investigate several cloud gaming platforms and their pricing models in comparison to the costs of other gaming platforms; we then explore engagement metrics in order to assess the enjoyment of playing the offered games. Lastly, from the perspective of the service providers, we aim to identify challenges in cost-effectively operating a large-scale cloud gaming service while maintaining high QoE values. Our analysis provides initial, yet comprehensive, reasons and models for the prospects of cloud gaming in a highly competitive market.
The recently published ITU-T Recommendation G.1032 proposes a list of factors that may influence cloud and online gaming Quality of Experience (QoE). This paper provides two practical evaluations of the proposed system and context influence factors. First, it investigates through an online survey (n=488) the popularity of platforms, preferred ways of distribution, and motivational aspects, including subjective valuations of characteristics offered by today's prevalent gaming platforms. Second, it evaluates a large dataset of objective metrics for various gaming platforms (game lists, playthrough lengths, prices, etc.) and contrasts these metrics against the gamers' opinions. The combined data-driven approach presented in this paper complements the in-person and lab studies usually employed.
The capabilities of small satellites have improved significantly in recent years. Multi-satellite systems in particular are becoming increasingly popular, since they enable new applications. The development and testing of these multi-satellite systems is a new challenge for engineers and requires appropriate development and testing environments. In this paper, ESTNeT, a modular network simulation framework for space–terrestrial systems, is presented. It enables discrete event simulations for the development and testing of communication protocols, as well as mission-based analysis of other satellite system aspects such as power supply and attitude control. ESTNeT is based on the discrete event simulator OMNeT++ and will be released under an open source license.
Due to biased assumptions on the underlying ordinal rating scale in subjective Quality of Experience (QoE) studies, Mean Opinion Score (MOS)-based evaluations provide results, which are hard to interpret and can be misleading. This paper proposes to consider the full QoE distribution for evaluating, reporting, and modeling QoE results instead of relying on MOS-based metrics derived from results based on ordinal rating scales. The QoE distribution can be represented in a concise way by using the parameters of a multinomial distribution without losing any information about the underlying QoE ratings, and even keeps backward compatibility with previous, biased MOS-based results. Considering QoE results as a realization of a multinomial distribution allows to rely on a well-established theoretical background, which enables meaningful evaluations also for ordinal rating scales. Moreover, QoE models based on QoE distributions keep detailed information from the results of a QoE study of a technical system, and thus, give an unprecedented richness of insights into the end users’ experience with the technical system. In this work, existing and novel statistical methods for QoE distributions are summarized and exemplary evaluations are outlined. Furthermore, using the novel concept of quality steps, simulative and analytical QoE models based on QoE distributions are presented and showcased. The goal is to demonstrate the fundamental advantages of considering QoE distributions over MOS-based evaluations if the underlying rating data is ordinal in nature.
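As a brief illustration of the concept, the following sketch represents rating counts as a multinomial distribution, recovers the MOS for backward compatibility, and derives a metric the MOS cannot express; the rating counts are invented.

```python
# Sketch of representing QoE results as a multinomial distribution over the
# ordinal rating scale instead of collapsing them to a single MOS value.
import numpy as np

counts = np.array([5, 10, 25, 40, 20])   # ratings 1 ("bad") .. 5 ("excellent")
p = counts / counts.sum()                # multinomial parameters

# The full distribution keeps all information ...
print("QoE distribution:", p)
# ... and remains backward compatible with MOS-based reporting:
scale = np.arange(1, 6)
print("MOS:", (scale * p).sum())
# It also yields metrics a MOS cannot express, e.g. the "good or better" ratio:
print("P(rating >= 4):", p[3:].sum())
```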
Deriving QoE in systems: from fundamental relationships to a QoE-based Service-level Quality Index
(2020)
With Quality of Experience (QoE) research having made significant advances over the years, service and network providers aim at user-centric evaluation of the services provided in their system. The question arises how to derive QoE in systems. In the context of subjective user studies conducted to derive relationships between influence factors and QoE, user diversity leads to varying distributions of user rating scores for different test conditions. Such models are commonly exploited by providers to derive various QoE metrics in their system, such as expected QoE, or the percentage of users rating above a certain threshold. The question then becomes how to combine (a) user rating distributions obtained from subjective studies, and (b) system parameter distributions, so as to obtain the actual observed QoE distribution in the system? Moreover, how can various QoE metrics of interest in the system be derived? We prove fundamental relationships for the derivation of QoE in systems, thus providing an important link between the QoE community and the systems community. In our numerical examples, we focus mainly on QoE metrics. We furthermore provide a more generalized view on quantifying the quality of systems by defining a QoE-based Service-level Quality Index. This index exploits the fact that quality can be seen as a proxy measure for utility. Following the assumption that not all user sessions should be weighted equally, we aim to provide a generic framework that can be utilized to quantify the overall utility of a service delivered by a system.
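The basic combination can be illustrated as a mixture: the QoE distribution observed in the system is the sum of the per-condition rating distributions, weighted by how often each condition occurs in the system. All numbers below are invented for illustration.

```python
# Sketch of combining (a) rating distributions from subjective studies with
# (b) the system's condition probabilities into the observed QoE
# distribution, from which QoE metrics of interest can then be derived.
import numpy as np

# Rows: test conditions (e.g. low/high delay); columns: ratings 1..5
rating_dist = np.array([[0.02, 0.08, 0.20, 0.40, 0.30],   # good condition
                        [0.25, 0.35, 0.25, 0.10, 0.05]])  # bad condition
cond_prob = np.array([0.8, 0.2])   # system parameter distribution

system_qoe = cond_prob @ rating_dist
scale = np.arange(1, 6)
print("system QoE distribution:", system_qoe)
print("expected QoE:", scale @ system_qoe)
print("share rating >= 4:", system_qoe[3:].sum())
```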
Evaluating the Quality of Experience (QoE) of video streaming and its influence factors has become paramount for streaming providers, as they want to maintain high satisfaction for their customers. In this context, crowdsourced user studies became a valuable tool to evaluate different factors which can affect the perceived user experience on a large scale. In general, most of these crowdsourcing studies either use, what we refer to, as an in vivo or an in vitro interface design. In vivo design means that the study participant has to rate the QoE of a video that is embedded in an application similar to a real streaming service, e.g., YouTube or Netflix. In vitro design refers to a setting, in which the video stream is separated from a specific service and thus, the video plays on a plain background. Although these interface designs vary widely, the results are often compared and generalized. In this work, we use a crowdsourcing study to investigate the influence of three interface design alternatives, an in vitro and two in vivo designs with different levels of interactiveness, on the perceived video QoE. Contrary to our expectations, the results indicate that there is no significant influence of the study’s interface design in general on the video experience. Furthermore, we found that the in vivo design does not reduce the test takers’ attentiveness. However, we observed that participants who interacted with the test interface reported a higher video QoE than other groups.
Purpose
Pronounced differences in individual physiological adaptation may occur following various training mesocycles in runners. Here we aimed to assess the individual changes in performance and physiological adaptation of recreational runners performing mesocycles with different intensity, duration and frequency.
Methods
Employing a randomized cross-over design, we assessed the intra-individual physiological responses [i.e., peak (\(\dot{VO}_{2peak}\)) and submaximal (\(\dot{VO}_{2submax}\)) oxygen uptake, velocity at lactate thresholds (V\(_2\), V\(_4\))] and performance (time-to-exhaustion, TTE) of 13 recreational runners who each performed three 3-week mesocycles: high-intensity interval training (HIIT), high-volume low-intensity training (HVLIT), or more but shorter sessions of HVLIT (high-frequency training; HFT).
Results
\(\dot{VO}_{2submax}\), V\(_2\), V\(_4\), and TTE were not altered by HIIT, HVLIT, or HFT (p > 0.05). \(\dot{VO}_{2peak}\) improved to the same extent following HVLIT (p = 0.045) and HFT (p = 0.02). The proportion of moderately negative responders was higher following HIIT (15.4%) and HFT (15.4%) than HVLIT (7.6%). The proportion of very positive responders was higher following HVLIT (38.5%) than HFT (23%) or HIIT (7.7%). 46% of the runners responded positively to two mesocycles, while 23% did not respond to any.
Conclusion
On a group level, none of the interventions altered \(\dot{VO}_{2submax}\), V\(_2\), V\(_4\) or TTE, while HVLIT and HFT improved \(\dot{VO}_{2peak}\). The mean adaptation index indicated similar numbers of positive, negative and non-responders to HIIT, HVLIT and HFT, but more very positive responders to HVLIT than HFT or HIIT. 46% responded positively to two mesocycles, while 23% did not respond to any. These findings indicate that the magnitude of responses to HIIT, HVLIT and HFT is highly individual and no pattern was apparent.
In the past two decades, there has been a trend to move from traditional television to Internet-based video services. With video streaming becoming one of the most popular applications in the Internet and the current state of the art in media consumption, the quality expectations of consumers are increasing. In contrast to a few years ago, low-quality videos are no longer considered acceptable, due to the increased size and resolution of devices. If the high expectations of the users are not met and a video is delivered in poor quality, they often abandon the service. Therefore, Internet Service Providers (ISPs) and video service providers face the challenge of providing seamless multimedia delivery in high quality. Currently, during peak hours, video streaming causes almost 58% of the downstream traffic on the Internet. With higher mobile bandwidth, mobile video streaming has also become commonplace. According to the 2019 Cisco Visual Networking Index, 79% of mobile traffic will be video traffic in 2022, and, according to Ericsson, video is forecast to make up 76% of total Internet traffic by 2025. Ericsson further predicts that in 2024 over 1.4 billion devices will be subscribed to 5G, which will offer a downlink data rate of 100 Mbit/s in dense urban environments.
One of the most important goals of ISPs and video service providers is for their users to have a high Quality of Experience (QoE). The QoE describes the degree of delight or annoyance a user experiences when using a service or application. In video streaming the QoE depends on how seamless a video is played and whether there are stalling events or quality degradations. These characteristics of a transmitted video are described as the application layer Quality of Service (QoS). In general, the QoS is defined as "the totality of characteristics of a telecommunications service that bear on its ability to satisfy stated and implied needs of the user of the service" by the ITU. The network layer QoS describes the performance of the network and is decisive for the application layer QoS.
In Internet video, a buffer is typically used to store downloaded video segments in order to compensate for network fluctuations. If the buffer runs empty, stalling occurs. If the available bandwidth decreases temporarily, the video can still be played out from the buffer without interruption. Different policies and parameters determine how large the buffer is, at what buffer level to start the video, and at what buffer level to resume playout after stalling. These have to be finely tuned to achieve the highest QoE for the user. If the bandwidth decreases for a longer time period, a limited buffer will deplete and stalling cannot be avoided. An important research question is how to configure the buffer optimally for different users and situations. In this work, we tackle this question using analytic models and measurement studies. With HTTP Adaptive Streaming (HAS), video players can adapt the video bit rate at the client side according to the available network capacity. This way, the depletion of the video buffer, and thus stalling, can be avoided. In HAS, the quality in which the video is played and the number of quality switches also have an impact on the QoE. Thus, an important problem is the adaptation of video streaming so that these parameters are optimized. In a shared WiFi network, multiple video users share a single bottleneck link and compete for bandwidth. In such a scenario, it is important that resources are allocated to users in a way that all can have a similar QoE. In this work, we therefore investigate the possible fairness gain when moving from network fairness towards application-layer QoS fairness. In mobile scenarios, the energy and data consumption of the user device are limited resources that must be managed besides the QoE. It is therefore also necessary to investigate solutions that conserve these resources in mobile devices. But how can resources be conserved without sacrificing application layer QoS? As an example of such a solution, this work presents a new probabilistic adaptation algorithm that uses abandonment statistics for its decision making, aiming to minimize the resource consumption while maintaining high QoS.
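To make the buffer dynamics concrete, the following minimal sketch simulates a playout buffer under a given bandwidth trace and counts stalling events. It is an illustration only, not code from the thesis; the function name, the fluid-style download model, and the threshold parameters are our own simplifying assumptions.

```python
# Toy playout-buffer simulation: the buffer fills at rate bandwidth/bitrate
# (seconds of video per second of wall time) and drains at rate 1 while playing.
def simulate_buffer(bandwidth_trace, bitrate, start_threshold, resume_threshold, dt=0.1):
    """Count stalling events for a bandwidth trace (Mbit/s, sampled every dt seconds)."""
    buffer_s = 0.0          # buffered video in seconds
    playing = False
    stalls = 0
    for bw in bandwidth_trace:
        buffer_s += (bw / bitrate) * dt              # download adds video time
        if playing:
            buffer_s -= dt                           # playback drains the buffer
            if buffer_s <= 0:                        # buffer ran empty -> stalling
                buffer_s, playing, stalls = 0.0, False, stalls + 1
        elif buffer_s >= (start_threshold if stalls == 0 else resume_threshold):
            playing = True                           # (re)start playout
    return stalls

# 30 s of ample bandwidth followed by 30 s of starvation at a 3 Mbit/s video bitrate:
print(simulate_buffer([4.0] * 300 + [0.5] * 300, 3.0, start_threshold=2.0, resume_threshold=4.0))
```

The start and resume thresholds are exactly the policy knobs discussed above: larger values delay playout but reduce the risk of repeated stalling.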
With current network developments such as 5G, bandwidths are increasing, latencies are decreasing, and networks are becoming more stable, leading to higher QoS. This allows new real-time, data-intensive applications such as cloud gaming, virtual reality, and augmented reality to become feasible on mobile devices, posing completely new research questions. The high energy consumption of such applications remains an issue, as the energy capacity of devices is currently not increasing as quickly as the available data rates. In this work, we compare the optimal performance of different strategies for adaptive 360-degree video streaming.
Latency is a key characteristic inherent to any computer system. Motion-to-Photon (MTP) latency describes the time between the movement of a tracked object and its corresponding movement rendered and depicted by computer-generated images on a graphical output screen. High MTP latency can cause a loss of performance in interactive graphics applications and, even worse, can provoke cybersickness in Virtual Reality (VR) applications. Cybersickness can degrade VR experiences or render them completely unusable, and it can confound the findings of an otherwise sound experiment. Latency as a contributing factor to cybersickness therefore needs to be properly understood: its effects need to be analyzed, its sources identified, good measurement methods developed, and proper countermeasures devised in order to reduce the potentially harmful impact of latency on the usability and safety of VR systems. Research shows that latency can exhibit intricate timing patterns with various spiking and periodic behaviors. These timing behaviors may vary, yet most are found to provoke cybersickness. Overall, latency can differ drastically between systems, which interferes with the generalization of measurement results. This review article describes the causes and effects of latency with regard to cybersickness. We report on different existing approaches to measure and report latency. The article thus provides readers with the knowledge to understand and report latency for their own applications, evaluations, and experiments. It should also help to measure, identify, and finally control and counteract latency, and hence to gain confidence in the soundness of empirical data collected by VR exposures. Low latency increases the usability and safety of VR systems.
The electric propulsion system NanoFEEP was integrated and tested in orbit on the UWE-4 satellite, which marks the first successful demonstration of an electric propulsion system on board a 1U CubeSat. In-orbit characterization measurements of the heating process of the propellant and the power consumption of the propulsion system at different thrust levels are presented. Furthermore, an analysis of the thrust vector direction based on its effect on the attitude of the spacecraft is described. The employed heater liquefies the propellant for a duration of 30 min per orbit and consumes 103 ± 4 mW. During this time, the respective thruster can be activated. The propulsion system, including one thruster head, its corresponding heater, the neutralizer, and the digital components of the power processing unit, consumes 8.5 ± 0.1 mW·µA\(^{-1}\) + 184 ± 8.5 mW, scaling with the emitter current. The estimated thrust directions of two thruster heads are at angles of 15.7 ± 7.6° and 13.2 ± 5.5° relative to their mounting direction in the CubeSat structure. In light of the very limited power on a 1U CubeSat, the NanoFEEP propulsion system is a very viable option. The heater of subsequent NanoFEEP thrusters has already been improved, such that the system can be activated during the whole orbit period.
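The reported linear power model is easy to apply in a power budget. The following sketch simply evaluates the nominal figures from the abstract; the function name is ours and the uncertainty terms are omitted.

```python
# Nominal power model from the in-orbit characterization: 8.5 mW/µA * I + 184 mW.
def nanofeep_power_mw(emitter_current_ua: float) -> float:
    """Approximate power draw (mW) of the propulsion system for a given
    emitter current (µA), ignoring the reported measurement uncertainties."""
    return 8.5 * emitter_current_ua + 184.0

# Example: at 20 µA emitter current the model predicts 354 mW,
# plus 103 mW for the heater while the propellant is being liquefied.
print(nanofeep_power_mw(20.0))  # -> 354.0
```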
Aims Acute myocardial infarction (MI) is the major cause of chronic heart failure. The activity of blood coagulation factor XIII (FXIIIa) plays an important role in rodents as a healing factor after MI, whereas its role in healing and remodelling processes in humans remains unclear. We prospectively evaluated the relevance of FXIIIa after acute MI as a potential early prognostic marker for adequate healing.
Methods and results This monocentric prospective cohort study investigated cardiac remodelling in patients with ST-elevation MI and followed them up for 1 year. Serum FXIIIa was serially assessed during the first 9 days after MI and after 2, 6, and 12 months. Cardiac magnetic resonance imaging was performed within 4 days after MI (Scan 1), after 7 to 9 days (Scan 2), and after 12 months (Scan 3). The FXIII valine-to-leucine (V34L) single-nucleotide polymorphism rs5985 was genotyped. One hundred forty-six patients were investigated (mean age 58 ± 11 years, 13% women). Median FXIIIa was 118% (quartiles 102–132%) and dropped to a trough on the second day after MI: 109% (98–109%; P < 0.001). FXIIIa recovered slowly over time, reaching baseline after 2 to 6 months and surpassing baseline levels only after 12 months: 124% (110–142%). The development of FXIIIa after MI was independent of the genotype. FXIIIa on Day 2 was strongly and inversely associated with the relative size of MI in Scan 1 (Spearman's ρ = –0.31; P = 0.01) and Scan 3 (ρ = –0.39; P < 0.01) and positively associated with left ventricular ejection fraction: ρ = 0.32 (P < 0.01) and ρ = 0.24 (P = 0.04), respectively.
Conclusions FXIII activity after MI is highly dynamic, exhibiting a significant decline in the early healing period and reconstitution 6 months later. Depressed FXIIIa early after MI predicted a greater size of MI and a lower left ventricular ejection fraction after 1 year. The clinical relevance of these findings remains to be tested in a randomized trial.
Desert organisms like sandfish lizards (SLs) bend and generate thrust in granular media to escape heat and hunt for prey [1]. Moreover, SLs seem to have striking capabilities to swim in an undulatory fashion, keeping the same wavelength even in terrains of different volumetric densities, hence behaving as rigid bodies. This paper recommends new research directions for planetary robotics, adapting principles of sand swimmers to improve the robustness of surface exploration robots. First, we summarize previous efforts on bio-inspired hardware developed for granular terrains and for accessing complex geological features. Then, a rigid wheel design is proposed to imitate SL locomotion capabilities. To derive force models that predict the performance of such a bio-inspired mobility system, different approaches such as Resistive Force Theory (RFT) and analytical terramechanics are introduced. While in typical wheeled robots slip and sinkage increase with time, the new design aims to imitate the traversability of SLs, which appear to maintain constant slip while displacing at subsurface levels.
An Overview of Design Patterns for Self-Adaptive Systems in the Context of the Internet of Things
(2020)
The Internet of Things (IoT) requires the integration of all available, highly specialized, and heterogeneous devices, ranging from embedded sensor nodes to servers in the cloud. The self-adaptive research domain provides adaptive capabilities that can support the integration in IoT systems. However, developing such systems is a challenging, error-prone, and time-consuming task. In this context, design patterns propose already used and optimized solutions to specific problems in various contexts. Applying design patterns might help to reuse existing knowledge about similar development issues. However, so far, there is a lack of taxonomies on design patterns for self-adaptive systems. To tackle this issue, in this paper, we provide a taxonomy on design patterns for self-adaptive systems that can be transferred to support adaptivity in IoT systems. Besides describing the taxonomy and the design patterns, we discuss their applicability in an Industrial IoT case study.
Over the last decades, cybersecurity has become an increasingly important issue. Between 2011 and 2019 alone, the losses from cyberattacks in the United States grew by 6217%. At the same time, attacks became not only more intensive but also more versatile and diverse. Cybersecurity has become everyone's concern. Today, service providers require sophisticated and extensive security infrastructures comprising many security functions dedicated to various cyberattacks. Still, attacks intensify to a level where infrastructures can no longer keep up; simply scaling up is no longer sufficient. To address this challenge, the Cloud Security Alliance (CSA) proposed, in a whitepaper, multiple work packages for security infrastructures that leverage the possibilities of Software-defined Networking (SDN) and Network Function Virtualization (NFV).
Security functions require a more sophisticated modeling approach than regular network functions. Notably, the property of dropping packets deemed malicious has a significant impact on Security Service Function Chains (SSFCs), i.e., service chains consisting of multiple security functions protecting against multiple attack vectors. Under attack, the order of these chains influences the end-to-end system performance depending on the attack type. Unfortunately, it is hard to predict the attack composition at system design time. Thus, we make a case for dynamic attack-aware SSFC reordering. We also tackle the lack of integration between security functions and the surrounding network infrastructure, the insufficient use of short-term CPU frequency boosting, and the lack of Intrusion Detection and Prevention Systems (IDPS) against database ransomware attacks.
Current works focus on characterizing the performance of security functions and their behavior under overload without considering the surrounding infrastructure. Other works aim at replacing security functions using network infrastructure features but do not consider integrating security functions within the network. Further publications deal with using SDN for security or with new vulnerabilities introduced through SDN, but they do not take security function performance into account. NFV is a popular research field dealing with frameworks, benchmarking methods, the combination with SDN, and the implementation of security functions as Virtualized Network Functions (VNFs). Research in this area brought forth the concept of Service Function Chains (SFCs) that chain multiple network functions one after another. Nevertheless, these works still do not consider the specifics of security functions. The mentioned CSA whitepaper proposes many valuable ideas but leaves their realization open to others.
This thesis presents solutions to increase the performance of single security functions using SDN, performance modeling, a framework for attack-aware SSFC reordering, a solution to make better use of CPU frequency boosting, and an IDPS against database ransomware.
Specifically, the primary contributions of this work are:
• We present approaches to dynamically bypass Intrusion Detection Systems (IDS) in order to increase their performance without reducing the security level. To this end, we develop and implement three SDN-based approaches (two dynamic and one static).
We evaluate the proposed approaches regarding security and performance and show that they significantly increase the performance compared to an inline IDS without significant security deficits. We show that using software switches can further increase the performance of the dynamic approaches, up to a point where they eliminate any throughput drawbacks of using the IDS.
• We design a DDoS Protection System (DPS) against TCP SYN flood attacks in the form of a VNF that works inside an SDN-enabled network. This solution eliminates known scalability and performance drawbacks of existing solutions for this attack type.
Then, we evaluate this solution, showing that it correctly handles the connection establishment, and present solutions for an observed issue. Next, we evaluate the performance, showing that our solution increases performance up to three times. Parallelization and parameter tuning yield another 76% performance boost. Based on these findings, we discuss optimal deployment strategies.
• We introduce the idea of attack-aware SSFC reordering and explain its impact in a theoretical scenario. Then, we discuss the required information to perform this process.
We validate our claim of the importance of the SSFC order by analyzing the behavior of single security functions and SSFCs. Based on the results, we conclude that the order has a massive impact on performance, up to three orders of magnitude, and we find contradicting optimal orders for different workloads. Thus, we demonstrate the need for dynamic reordering.
Last, we develop a model for SSFCs regarding traffic composition and resource demands. We classify the traffic into multiple classes and model the effect of single security functions on the traffic as well as their generated resource demands as functions of the incoming network traffic. Based on our model, we propose three approaches to determine optimal orders for reordering (a toy illustration of this ordering effect is sketched after this list).
• We implement a framework for attack-aware SSFC reordering based on this knowledge. The framework places all security functions inside an SDN-enabled network and reorders them using SDN flows.
Our evaluation shows that the framework can enforce all routes as desired. It correctly adapts to all attacks and returns to the original state after the attacks cease. We find possible security issues at the moment of reordering and present solutions to eliminate them.
• Next, we design and implement an approach to load balance servers while taking into account their ability to go into a state of Central Processing Unit (CPU) frequency boost. To this end, the approach collects temperature information from available hosts and places services on the host that can attain the boosted mode the longest.
We evaluate this approach and show its effectiveness. For high load scenarios, the approach increases the overall performance and the performance per watt. Even better results show up for low load workloads, where not only all performance metrics improve but also the temperatures and total power consumption decrease.
• Last, we design an IDPS protecting against database ransomware attacks that comprise multiple queries to attain their goal. Our solution models these attacks using a Colored Petri Net (CPN).
A proof-of-concept implementation shows that our approach is capable of detecting attacks without creating false positives for benign scenarios. Furthermore, our solution creates only a small performance impact.
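As referenced in the SSFC modeling item above, the following toy sketch illustrates why the chain order matters: each function's work scales with the traffic it still sees, so a cheap function that filters the dominant attack early relieves expensive downstream functions. The traffic mix, cost, and drop-rate numbers are invented for illustration and are not the thesis' model.

```python
# Toy SSFC ordering model: brute-force the chain order with minimal total work.
from itertools import permutations

def chain_cost(order, traffic, cost, drop):
    """Total resource demand of an SSFC for a traffic mix {class: volume}."""
    total, load = 0.0, dict(traffic)
    for fn in order:
        total += cost[fn] * sum(load.values())   # work scales with incoming load
        # Each function removes a fraction of each traffic class it filters.
        load = {c: v * (1 - drop[fn].get(c, 0.0)) for c, v in load.items()}
    return total

traffic = {"benign": 100, "syn_flood": 900}       # hypothetical attack-heavy mix
cost = {"ids": 2.0, "syn_proxy": 0.5}             # per-unit processing cost
drop = {"ids": {}, "syn_proxy": {"syn_flood": 0.99}}
best = min(permutations(cost), key=lambda o: chain_cost(o, traffic, cost, drop))
print(best)  # -> ('syn_proxy', 'ids'): absorb the flood before the expensive IDS
```

Under a different traffic mix the cost ranking of the orders changes, which mirrors the contradicting optimal orders for different workloads reported above.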
Our contributions can help to improve the performance of security infrastructures. We see multiple application areas from data center operators over software and hardware developers to security and performance researchers. Most of the above-listed contributions found use in several research publications.
Regarding future work, we see the need to better integrate SDN-enabled security functions and SSFC reordering in data center networks. Future SSFC should discriminate between different traffic types, and security frameworks should support automatically learning models for security functions. We see the need to consider energy efficiency when regarding SSFCs and take CPU boosting technologies into account when designing performance models as well as placement, scaling, and deployment strategies. Last, for a faster adaptation against recent ransomware attacks, we propose machine-assisted learning for database IDPS signatures.
Computer games are highly immersive, engaging, and motivating learning environments. By providing a tutorial at the start of a new game, players learn the basics of the game's underlying principles and practice how to play it successfully. During the actual gameplay, players repeatedly apply this knowledge, thus consolidating it through repetition. Computer games also confront players with a constant stream of new challenges that increase in difficulty over time. As a result, computer games even require players to transfer their knowledge to master these new challenges. A computer game consists of several game mechanics. Game mechanics are the rules of a computer game and encode the game's underlying principles. They create the virtual environments, generate a game's challenges, and allow players to interact with the game. Game mechanics can also encode real-world knowledge, which may be acquired by players via gameplay. However, the actual process of encoding and learning knowledge using game mechanics has not yet been thoroughly defined. This thesis therefore proposes a theoretical model of knowledge learning using game mechanics: the Gamified Knowledge Encoding. The model is applied to design a serious game for affine transformations, GEtiT, and to predict the learning outcome of playing a computer game that encodes orbital mechanics in its game mechanics, Kerbal Space Program. To assess the effects of different visualization technologies on the overall learning outcome, GEtiT visualizes the gameplay in desktop-3D and immersive virtual reality. The model's applicability for effective game design as well as GEtiT's overall design are evaluated in a usability study. The learning outcome of playing GEtiT and Kerbal Space Program is assessed in four additional user studies. The studies' results validate the use of the Gamified Knowledge Encoding for developing effective serious games and for predicting the learning outcome of existing serious games. Compared to a traditional learning method, GEtiT and Kerbal Space Program yield a similar training effect but a higher motivation to tackle the assignments. In conclusion, this thesis expands the understanding of using game mechanics for effective knowledge learning. The presented results are of high importance for researchers, educators, and developers, as they also provide guidelines for the development of effective serious games.
This thesis is divided into two parts.
In the first part we contribute to a working program initiated by Pudlák (2017) who lists several major complexity theoretic conjectures relevant to proof complexity and asks for oracles that separate pairs of corresponding relativized conjectures. Among these conjectures are:
- \(\mathsf{CON}\) and \(\mathsf{SAT}\): coNP (resp., NP) does not contain complete sets that have P-optimal proof systems.
- \(\mathsf{CON}^{\mathsf{N}}\): coNP does not contain complete sets that have optimal proof systems.
- \(\mathsf{TFNP}\): there do not exist complete total polynomial search problems (also known as total NP search problems).
- \(\mathsf{DisjNP}\) and \(\mathsf{DisjCoNP}\): there do not exist complete disjoint NP pairs (coNP pairs).
- \(\mathsf{UP}\): UP does not contain complete problems.
- \(\mathsf{NP}\cap\mathsf{coNP}\): \(\mathrm{NP}\cap\mathrm{coNP}\) does not contain complete problems.
- \(\mathrm{P}\ne\mathrm{NP}\).
We construct several of the oracles that Pudlák asks for.
In the second part we investigate the computational complexity of balance problems for \(\{-,\cdot\}\)-circuits computing finite sets of natural numbers (note that \(-\) denotes the set difference). These problems naturally build on problems for integer expressions and integer circuits studied by Stockmeyer and Meyer (1973), McKenzie and Wagner (2007), and Glaßer et al. (2010).
Our work shows that the balance problem for \(\{-,\cdot\}\)-circuits is undecidable; it is the first natural problem for integer circuits or related constraint satisfaction problems that admits only one arithmetic operation and is proven to be undecidable.
Starting from this result we precisely characterize the complexity of balance problems for proper subsets of \(\{-,\cdot\}\). These problems turn out to be complete for one of the classes L, NL, and NP.
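To make the circuit setting concrete, the following small sketch evaluates the two gate types involved, under the usual semantics from the integer-circuit literature (set difference and pairwise product); the helper names are ours.

```python
# Evaluating {-, .}-circuit gates over finite sets of natural numbers:
# "-" is the set difference and "." the pairwise product A.B = {a*b | a in A, b in B}.
def diff(a: set, b: set) -> set:
    return a - b

def prod(a: set, b: set) -> set:
    return {x * y for x in a for y in b}

A, B = {1, 2, 3}, {2}
print(prod(A, B))            # {2, 4, 6}
print(diff(A, prod(A, B)))   # {1, 3}
```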
These days, we are living in a digitalized world. Both our professional and private lives are pervaded by various IT services, which are typically operated using distributed computing systems (e.g., cloud environments). Due to the high level of digitalization, the operators of such systems are confronted with fast-paced and changing requirements. In particular, cloud environments have to cope with load fluctuations and respective rapid and unexpected changes in the computing resource demands. To face this challenge, so-called auto-scalers, such as the threshold-based mechanism in Amazon Web Services EC2, can be employed to enable elastic scaling of the computing resources. However, despite this opportunity, business-critical applications are still run with highly overprovisioned resources to guarantee a stable and reliable service operation. This strategy is pursued due to the lack of trust in auto-scalers and the concern that inaccurate or delayed adaptations may result in financial losses.
To adapt the resource capacity in time, the future resource demands must be "foreseen", as reacting to changes once they are observed introduces an inherent delay. In other words, accurate forecasting methods are required to adapt systems proactively. A powerful approach in this context is time series forecasting, which is also applied in many other domains. The core idea is to examine past values and predict how these values will evolve as time progresses. According to the "No-Free-Lunch Theorem", there is no algorithm that performs best for all scenarios. Therefore, selecting a suitable forecasting method for a given use case is a crucial task. Simply put, each method has its benefits and drawbacks, depending on the specific use case. The choice of the forecasting method is usually based on expert knowledge, which cannot be fully automated, or on trial-and-error. In both cases, this is expensive and prone to error.
Although auto-scaling and time series forecasting are established research fields, existing approaches cannot fully address the mentioned challenges: (i) In our survey on time series forecasting, we found that publications on time series forecasting typically consider only a small set of (mostly related) methods and evaluate their performance on a small number of time series with only a few error measures while providing no information on the execution time of the studied methods. Therefore, such articles cannot be used to guide the choice of an appropriate method for a particular use case; (ii) Existing open-source hybrid forecasting methods that take advantage of at least two methods to tackle the "No-Free-Lunch Theorem" are computationally intensive, poorly automated, designed for a particular data set, or they lack a predictable time-to-result. Methods exhibiting a high variance in the time-to-result cannot be applied for time-critical scenarios (e.g., auto-scaling), while methods tailored to a specific data set introduce restrictions on the possible use cases (e.g., forecasting only annual time series); (iii) Auto-scalers typically scale an application either proactively or reactively. Even though some hybrid auto-scalers exist, they lack sophisticated solutions to combine reactive and proactive scaling. For instance, resources are only released proactively while resource allocation is entirely done in a reactive manner (inherently delayed); (iv) The majority of existing mechanisms do not take the provider's pricing scheme into account while scaling an application in a public cloud environment, which often results in excessive charged costs. Even though some cost-aware auto-scalers have been proposed, they only consider the current resource demands, neglecting their development over time. For example, resources are often shut down prematurely, even though they might be required again soon.
To address the mentioned challenges and the shortcomings of existing work, this thesis presents three contributions: (i) The first contribution, a forecasting benchmark, addresses the problem of limited comparability between existing forecasting methods; (ii) The second contribution, Telescope, provides an automated hybrid time series forecasting method addressing the challenge posed by the "No-Free-Lunch Theorem"; (iii) The third contribution, Chamulteon, provides a novel hybrid auto-scaler for coordinated scaling of applications comprising multiple services, leveraging Telescope to forecast the workload intensity as a basis for proactive resource provisioning. In the following, the three contributions of the thesis are summarized:
Contribution I - Forecasting Benchmark
To establish a level playing field for evaluating the performance of forecasting methods in a broad setting, we propose a novel benchmark that automatically evaluates and ranks forecasting methods based on their performance in a diverse set of evaluation scenarios. The benchmark comprises four different use cases, each covering 100 heterogeneous time series taken from different domains. The data set was assembled from publicly available time series and was designed to exhibit much higher diversity than existing forecasting competitions. Besides proposing a new data set, we introduce two new measures that describe different aspects of a forecast. We applied the developed benchmark to evaluate Telescope.
Contribution II - Telescope
To provide a generic forecasting method, we introduce a novel machine learning-based forecasting approach that automatically retrieves relevant information from a given time series. More precisely, Telescope automatically extracts intrinsic time series features and then decomposes the time series into components, building a forecasting model for each of them. Each component is forecast by applying a different method and then the final forecast is assembled from the forecast components by employing a regression-based machine learning algorithm. In more than 1300 hours of experiments benchmarking 15 competing methods (including approaches from Uber and Facebook) on 400 time series, Telescope outperformed all methods, exhibiting the best forecast accuracy coupled with a low and reliable time-to-result. Compared to the competing methods that exhibited, on average, a forecast error (more precisely, the symmetric mean absolute forecast error) of 29%, Telescope exhibited an error of 20% while being 2556 times faster. In particular, the methods from Uber and Facebook exhibited an error of 48% and 36%, and were 7334 and 19 times slower than Telescope, respectively.
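For intuition, the following strongly simplified sketch illustrates the decompose-forecast-recombine idea behind such hybrid methods. It is not the actual Telescope implementation (which is available on GitHub): it merely continues the trend linearly and repeats the last seasonal period, whereas Telescope forecasts each component with a dedicated method and recombines them with a learned regressor.

```python
# Naive decomposition-based forecast: split a series into trend/season/residual,
# extrapolate trend and season separately, and recombine.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

def naive_decomposition_forecast(series: pd.Series, period: int, horizon: int) -> np.ndarray:
    parts = seasonal_decompose(series, period=period, extrapolate_trend="freq")
    slope = parts.trend.iloc[-1] - parts.trend.iloc[-2]           # last trend slope
    trend_fc = parts.trend.iloc[-1] + slope * np.arange(1, horizon + 1)
    season_fc = np.resize(parts.seasonal.iloc[-period:].to_numpy(), horizon)
    return trend_fc + season_fc                                   # residual assumed zero-mean

# Synthetic seasonal series with linear growth, forecast 10 steps ahead:
ts = pd.Series(np.sin(np.linspace(0, 20 * np.pi, 200)) + np.linspace(0, 5, 200))
print(naive_decomposition_forecast(ts, period=20, horizon=10))
```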
Contribution III - Chamulteon
To enable reliable auto-scaling, we present a hybrid auto-scaler that combines proactive and reactive techniques to scale distributed cloud applications comprising multiple services in a coordinated and cost-effective manner. More precisely, proactive adaptations are planned based on forecasts of Telescope, while reactive adaptations are triggered based on actual observations of the monitored load intensity. To solve occurring conflicts between reactive and proactive adaptations, a complex conflict resolution algorithm is implemented. Moreover, when deployed in public cloud environments, Chamulteon reviews adaptations with respect to the cloud provider's pricing scheme in order to minimize the charged costs. In more than 400 hours of experiments evaluating five competing auto-scaling mechanisms in scenarios covering five different workloads, four different applications, and three different cloud environments, Chamulteon exhibited the best auto-scaling performance and reliability while at the same time reducing the charged costs. The competing methods provided insufficient resources for (on average) 31% of the experimental time; in contrast, Chamulteon cut this time to 8% and the SLO (service level objective) violations from 18% to 6% while using up to 15% less resources and reducing the charged costs by up to 45%.
The contributions of this thesis can be seen as major milestones in the domain of time series forecasting and cloud resource management. (i) This thesis is the first to present a forecasting benchmark that covers a variety of different domains with a high diversity between the analyzed time series. Based on the provided data set and the automatic evaluation procedure, the proposed benchmark contributes to enhancing the comparability of forecasting methods. The benchmarking results for different forecasting methods enable the selection of the most appropriate forecasting method for a given use case. (ii) Telescope provides the first generic and fully automated time series forecasting approach that delivers both accurate and reliable forecasts while making no assumptions about the analyzed time series. Hence, it eliminates the need for expensive, time-consuming, and error-prone procedures, such as trial-and-error searches or consulting an expert. This opens up new possibilities, especially in time-critical scenarios, where Telescope can provide accurate forecasts with a short and reliable time-to-result.
Although Telescope was applied for this thesis in the field of cloud computing, there is absolutely no limitation regarding the applicability of Telescope in other domains, as demonstrated in the evaluation. Moreover, Telescope, which was made available on GitHub, is already used in a number of interdisciplinary data science projects, for instance, predictive maintenance in an Industry 4.0 context, heart failure prediction in medicine, or as a component of predictive models of beehive development. (iii) In the context of cloud resource management, Chamulteon is a major milestone for increasing the trust in cloud auto-scalers. The complex resolution algorithm enables reliable and accurate scaling behavior that reduces losses caused by excessive resource allocation or SLO violations. In other words, Chamulteon provides reliable online adaptations minimizing charged costs while at the same time maximizing user experience.
E-mails, online banking, and video conferences have become an integral part of our everyday lives. All of these activities involve the digital transmission and storage of numerous pieces of personal information and confidential data. A wide variety of concepts, methods, and techniques exist to protect digital data against unauthorized access and manipulation; together they form the field of IT security. Classic security solutions from this field are firewalls and virus scanners. Such approaches are mostly rule-based and check files or incoming network traffic against a list of known attack signatures. Consequently, these systems can only detect already known attack scenarios and offer no protection against novel attacks. This creates a race between hackers and IT security experts, in which the hackers constantly look for new ways to circumvent existing security solutions, while the IT security experts continuously improve their protection mechanisms.
This thesis addresses the detection of attack scenarios in enterprise networks using data mining methods. These methods are able to learn and generalize the structures contained in representative data. Consequently, data mining methods are in principle suitable for detecting new attack scenarios, provided that these scenarios overlap with known attack scenarios or differ substantially from known normal behavior. In this work, network-based data in the NetFlow format are analyzed, as they provide an aggregated overview of the activity in the network. Network data can often not be published due to data protection concerns, which argues for the generation of synthetic but realistic network data. Furthermore, the nature of network data requires the analysis of a combination of continuous and categorical attributes, which above all complicates comparing data points with respect to their similarity.
This thesis provides methodological contributions to each of these three challenges. For the computation of distances between categorical values, two different approaches, ConDist and IP2Vec, are developed. ConDist is a universally applicable distance measure for data points consisting of continuous and categorical attributes. IP2Vec is specialized for network data and transforms categorical values into continuous vectors.
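IP2Vec, as described above, transforms categorical values into continuous vectors. The following sketch illustrates one way such an embedding can be learned, in the style of Word2Vec; it is our illustration, not the published IP2Vec implementation, and the flow records are invented.

```python
# Word2Vec-style embedding of categorical flow fields: values that occur in
# similar flow contexts (e.g., IP addresses talking to the same servers)
# receive similar continuous vectors.
from gensim.models import Word2Vec

flows = [  # hypothetical NetFlow-like records: src IP, dst IP, dst port, protocol
    ["192.168.1.2", "10.0.0.5", "443", "TCP"],
    ["192.168.1.3", "10.0.0.5", "443", "TCP"],
    ["192.168.1.2", "10.0.0.7", "53", "UDP"],
]
model = Word2Vec(sentences=flows, vector_size=16, window=3, min_count=1, sg=1)
print(model.wv["192.168.1.2"][:4])  # continuous vector replacing the categorical value
```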
For the generation of realistic network data, two different approaches are presented alongside an extensive literature review. First, a simulation-based approach for the generation of flow-based data sets is developed. This approach is built on a test environment and simulates typical user activities through automated Python scripts. In parallel, a second approach is developed for the synthetic generation of flow-based network data through modeling with Generative Adversarial Networks. This approach learns the underlying properties of the network data and is subsequently able to generate new network data with the same properties. While the first approach is suited to creating new data sets, the second approach can be used to enrich existing ones.
Finally, this thesis makes two contributions to the detection of attack scenarios. In the first contribution, a concept for the detection of attack scenarios is developed which is oriented towards the typical phases of an attack scenario. In the second contribution, a supervised and an unsupervised method for the detection of slow port scans are presented.
Purpose: A study of real-time adaptive radiotherapy systems was performed to test the hypothesis that, across delivery systems and institutions, the dosimetric accuracy is improved with adaptive treatments over non-adaptive radiotherapy in the presence of patient-measured tumor motion. Methods and materials: Ten institutions with robotic (2), gimbaled (2), MLC (4), or couch tracking (2) used common materials, including CT and structure sets, motion traces, and planning protocols, to create a lung and a prostate plan. For each motion trace, the plan was delivered twice to a moving dosimeter, with and without real-time adaptation. Each measurement was compared to a static measurement, and the percentage of failed points for gamma-tests was recorded. Results: For all lung traces, all measurement sets show improved dose accuracy with a mean 2%/2 mm gamma-fail rate of 1.6% with adaptation and 15.2% without adaptation (p < 0.001). For all prostate traces, the mean 2%/2 mm gamma-fail rate was 1.4% with adaptation and 17.3% without adaptation (p < 0.001). The difference between the four systems was small, with an average 2%/2 mm gamma-fail rate of <3% for all systems with adaptation for lung and prostate. Conclusions: The investigated systems all accounted for realistic tumor motion accurately and performed to a similar high standard, with real-time adaptation significantly outperforming non-adaptive delivery methods.
Affordable prices for 3D laser range finders and mature software solutions for registering multiple point clouds in a common coordinate system have paved the way for new areas of application for 3D point clouds. Nowadays, 3D laser scanners are used not only by digital surveying experts but also by law enforcement officials, construction workers, and archaeologists. Whether the purpose is digitizing factory production lines, preserving historic sites as digital heritage, or recording environments for gaming or virtual reality applications, it is hard to imagine a scenario in which the final point cloud should also contain the points of "moving" objects like factory workers, pedestrians, cars, or flocks of birds. For most post-processing tasks, moving objects are undesirable, not least because they appear in multiple scans or are distorted due to their motion relative to the scanner rotation.
The main contributions of this work are two post-processing steps for already registered 3D point clouds. The first is a new change detection approach based on a voxel grid, which partitions the input points into static and dynamic points using explicit change detection and subsequently removes the dynamic points to obtain a "cleaned" point cloud. The second uses this cleaned point cloud as input for detecting collisions between points of the environment point cloud and a point cloud of a model that is moved through the scene.
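As a rough, hypothetical illustration of voxel-grid-based partitioning, the sketch below flags points whose voxel is occupied in only a few of the registered scans, a simple occupancy heuristic. The thesis' method goes further and performs explicit change detection; the function names and the threshold are ours.

```python
# Occupancy-count heuristic over a voxel grid: points in voxels seen in fewer
# than `min_scans` scans are flagged as potentially dynamic.
import numpy as np
from collections import defaultdict

def dynamic_points(scans: list, voxel_size: float = 0.2, min_scans: int = 2) -> np.ndarray:
    counts = defaultdict(int)
    for scan in scans:  # count in how many scans each voxel is occupied
        for key in set(map(tuple, np.floor(scan / voxel_size).astype(int))):
            counts[key] += 1
    merged = np.vstack(scans)
    keys = map(tuple, np.floor(merged / voxel_size).astype(int))
    mask = np.array([counts[k] < min_scans for k in keys])
    return merged[mask]   # candidate dynamic points to remove for a "cleaned" cloud

scans = [np.random.rand(1000, 3) for _ in range(3)]  # three toy registered scans
print(dynamic_points(scans).shape)
```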
Our approach on explicit change detection is compared to the state of the art using multiple datasets including the popular KITTI dataset. We show how our solution achieves similar or better F1-scores than an existing solution while at the same time being faster.
To detect collisions we do not produce a mesh but approximate the raw point cloud data by spheres or cylindrical volumes. We show how our data structures allow efficient nearest neighbor queries that make our CPU-only approach comparable to a massively-parallel algorithm running on a GPU. The utilized algorithms and data structures are discussed in detail. All our software is freely available for download under the terms of the GNU General Public license. Most of the datasets used in this thesis are freely available as well. We provide shell scripts that allow one to directly reproduce the quantitative results shown in this thesis for easy verification of our findings.
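The collision test described above reduces to nearest-neighbour queries between the environment cloud and the moved model cloud. The following sketch uses SciPy's k-d tree as a stand-in for the thesis' own data structures; the radius semantics and function name are our simplification.

```python
# Sphere-approximation collision test via k-d tree nearest-neighbour queries.
import numpy as np
from scipy.spatial import cKDTree

def collides(environment_pts: np.ndarray, model_pts: np.ndarray, radius: float) -> bool:
    """True if any model point lies within `radius` of an environment point,
    i.e. the spheres approximating the two clouds intersect."""
    tree = cKDTree(environment_pts)
    dists, _ = tree.query(model_pts, k=1, distance_upper_bound=radius)
    return bool(np.isfinite(dists).any())   # inf means no neighbour within radius

env = np.random.rand(10000, 3)
model = np.random.rand(50, 3) + np.array([2.0, 0.0, 0.0])  # moved away from the cloud
print(collides(env, model, radius=0.05))  # -> False
```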
As part of the BMBF-funded project KALLIMACHOS at the University of Würzburg, the textual basis for digital editions is to be obtained via OCR, among other goals. The corpus under consideration consists of German, French, and Latin incunabula. This article shows how the problems of OCR on incunabula can be tackled with methods and programs that already exist today. To this end, a procedure was tested at the Würzburg University Library which already achieves character accuracies of up to 95 percent and word accuracies of up to 73 percent on selected works from a single print shop.
Nowadays, employees work with applications, technical services, and systems for hours every day. Performance degradations of such systems may therefore be perceived negatively by the employees, increase frustration, and have a negative effect on their productivity. Assessing an application's performance in order to provide smooth operation is part of application management. Within this process, it is not sufficient to assess the system performance solely based on technical performance parameters, e.g., response or loading times. These values have to be related to the performance quality perceived by the users: the quality of experience (QoE).
This dissertation focuses on the monitoring and estimation of the QoE of enterprise applications. As building models to estimate the QoE requires quality ratings from the users as ground truth, one part of this work addresses methods to collect such ratings. Besides evaluating approaches to improve the quality of results of tasks and studies completed on crowdsourcing platforms, a general concept for monitoring and estimating QoE in enterprise environments is presented. Relevant design dimensions of subjective studies are identified, and their impact on the QoE is evaluated and discussed. Based on these findings, a methodology for collecting quality ratings from employees during their regular work is developed. The method is realized by a tool for conducting short surveys, which is deployed in a cooperating company.
As a foundation for learning QoE estimation models, this work investigates the relationship between user-provided ratings and technical performance parameters. This analysis is based on a data set collected in a user study in a cooperating company during a time span of 1.5 years. Finally, two QoE estimation models are introduced and their performance is evaluated.
Time-triggered communication is widely used throughout several industry domains, primarily for reliable and real-time capable data transfers. However, existing time-triggered technologies are designed for terrestrial usage and are not directly applicable to space applications due to the harsh environment. Instead, specific hardware must be developed to deal with thermal, mechanical, and especially radiation effects.

SpaceWire, an event-triggered communication technology, has been used for years in a large number of space missions. Its moderate complexity, heritage, and transmission rates of up to 400 Mbit/s are among its main advantages, and it is often without alternative for the on-board computing systems of spacecraft. At present, real-time data transfers are achieved either by prioritization inside SpaceWire routers or by applying a simplified time-triggered approach. These solutions are problematic if they are used inside distributed on-board computing systems or in networks that require more than a single router.

This work provides a solution to the real-time problem by developing a novel clock synchronization approach. The approach focuses on compatibility with distributed system structures and allows time-triggered data transfers. A significant difference to existing technologies is the remote clock estimation by the use of pulses. They are transferred over the network and remove the need for latency accumulation, which allows the incorporation of standardized SpaceWire equipment. Additionally, local clocks are controlled in a decentralized manner and provide different correction capabilities in order to handle oscillator-induced uncertainties. All these functionalities are provided by a newly developed Network Controller (NC), which is able to isolate the attached network and to control accesses.
Reliable, deterministic real-time communication is fundamental to most industrial systems today. In many other domains, Ethernet has become the most common platform for communication networks, but for a long time it was unsuitable to satisfy the requirements of industrial networks. This has changed with the introduction of Time-Sensitive Networking (TSN), a set of standards utilizing Ethernet to implement deterministic real-time networks. This makes Ethernet a viable alternative to the expensive fieldbus systems commonly used in industrial environments. However, TSN is not a silver bullet. Industrial networks are a complex and highly dynamic environment, and the configuration of TSN, especially with respect to latency, is a challenging but crucial task.
Various approaches have been pursued for the configuration of TSN in dynamic industrial environments. Optimization techniques like Linear Programming (LP) are able to determine an optimal configuration for a given network, but their time consumption increases exponentially with the complexity of the environment. Machine Learning (ML) has become widely popular in recent years and is able to approximate a near-optimal TSN configuration for networks of different complexity. Yet, ML models are usually trained in a supervised manner, which requires large amounts of data that have to be generated for the specific environment. Therefore, supervised methods are not scalable and do not adapt to the changing dynamics of the network environment.
To address these issues, this work proposes a Deep Reinforcement Learning (DRL) approach to the configuration of TSN in industrial networks. DRL combines two different disciplines, Deep Learning (DL) and Reinforcement Learning (RL), and has gained considerable traction in recent years due to breakthroughs in various domains. RL allows a challenging task like the configuration of TSN to be learned autonomously, without requiring any training data. The addition of DL makes it possible to apply well-studied RL methods to a complex environment such as dynamic industrial networks.
There are two major contributions made in this work. First, an interactive environment is proposed which allows for the simulation and configuration of industrial networks using basic TSN mechanisms. The environment provides an interface for applying various DRL methods to the problem of TSN configuration. The second contribution is an in-depth study on the application of two fundamentally different DRL methods to the proposed environment. Both methods are evaluated on networks of different complexity, and the results are compared to the ground truth and to the results of two supervised ML approaches. Ultimately, this work investigates whether DRL can adapt to changing dynamics of the environment in a more scalable manner than supervised methods.
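Such an interactive environment typically exposes a gym-style reset/step interface. The following toy sketch is our guess at what such an interface can look like, with an invented one-step latency model and reward; it is not the environment implemented in this work.

```python
# Hypothetical gym-style interface for a TSN configuration environment:
# the agent assigns a priority class to each stream; the reward penalizes
# streams whose resulting latency misses their deadline.
import numpy as np

class TsnConfigEnv:
    def __init__(self, n_streams: int = 8, n_priorities: int = 4):
        self.n_streams, self.n_priorities = n_streams, n_priorities

    def reset(self) -> np.ndarray:
        self.deadlines = np.random.uniform(1.0, 10.0, self.n_streams)  # ms
        return self.deadlines.copy()                                   # observation

    def step(self, action: np.ndarray):
        # Crude latency model: higher priority -> lower latency, plus load noise.
        latency = (self.n_priorities - action) * 1.5 + np.random.rand(self.n_streams)
        reward = -float(np.sum(latency > self.deadlines))  # missed deadlines
        return self.reset(), reward, True, {}              # one-step episodes

env = TsnConfigEnv()
obs = env.reset()
_, reward, _, _ = env.step(np.random.randint(env.n_priorities, size=env.n_streams))
print(reward)
```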
In recent years, great progress has been made in the area of Artificial Intelligence (AI) due to the possibilities of Deep Learning which steadily yielded new state-of-the-art results especially in many image recognition tasks.
Currently, in some areas, human performance is achieved or already exceeded.
This great development already had an impact on the area of Optical Music Recognition (OMR) as several novel methods relying on Deep Learning succeeded in specific tasks.
Musicologists are interested in large-scale musical analysis and in publishing digital transcriptions in a collection, enabling the development of tools for searching and data retrieval.
The application of OMR promises to simplify and thus speed-up the transcription process by either providing fully-automatic or semi-automatic approaches.
This thesis addresses the automatic transcription of Medieval music, with a focus on square notation, which poses a challenging task due to complex layouts, highly varying handwritten notations, and degradation.
However, since handwritten music notations are quite complex to read, even for an experienced musicologist, it is to be expected that, even with new OMR techniques, manual corrections will be required to obtain the transcriptions.
This thesis presents several new approaches and open source software solutions for layout analysis and Automatic Text Recognition (ATR) for early documents and for OMR of Medieval manuscripts providing state-of-the-art technology.
Fully Convolutional Networks (FCN) are applied for the segmentation of historical manuscripts and early printed books, to detect staff lines, and to recognize neume notations.
The ATR engine Calamari is presented which allows for ATR of early prints and also the recognition of lyrics.
Configurable CNN/LSTM-network architectures which are trained with the segmentation-free CTC-loss are applied to the sequential recognition of text but also monophonic music.
Finally, a syllable-to-neume assignment algorithm is presented which represents the final step to obtain a complete transcription of the music.
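The CNN/LSTM-CTC combination mentioned above can be sketched in a few lines. The following is our minimal PyTorch illustration of such a line recognizer, not the code of the Calamari engine; the layer sizes are arbitrary.

```python
# Minimal CNN/LSTM line recognizer trained with the segmentation-free CTC loss.
import torch
import torch.nn as nn

class CnnLstmCtc(nn.Module):
    def __init__(self, n_classes: int, height: int = 48):
        super().__init__()
        self.cnn = nn.Sequential(                       # extract features, shrink image
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(64 * (height // 4), 128, bidirectional=True, batch_first=True)
        self.head = nn.Linear(256, n_classes + 1)       # +1 for the CTC blank symbol

    def forward(self, x):                               # x: (batch, 1, height, width)
        f = self.cnn(x)                                 # (batch, 64, height/4, width/4)
        f = f.permute(0, 3, 1, 2).flatten(2)            # one feature vector per time step
        out, _ = self.lstm(f)
        return self.head(out).log_softmax(-1)           # per-timestep class log-probs

model = CnnLstmCtc(n_classes=80)
logits = model(torch.randn(2, 1, 48, 256))              # two dummy text-line images
loss = nn.CTCLoss()(logits.permute(1, 0, 2),            # CTC expects (T, batch, classes)
                    torch.randint(1, 81, (2, 20)),      # dummy target label sequences
                    torch.full((2,), logits.size(1)),   # input lengths
                    torch.full((2,), 20))               # target lengths
print(loss.item())
```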
The evaluations show that the performance of any algorithm depends strongly on the material at hand and on the number of training instances.
The presented staff line detection correctly identifies staff lines and staves with an $F_1$-score of above $99.5\%$.
The symbol recognition yields a diplomatic Symbol Accuracy Rate (dSAR) of above $90\%$ by counting the number of correct predictions in the symbols sequence normalized by its length.
The ATR of lyrics achieved a Character Accuracy Rate (CAR) (the number of correct predictions normalized by the sentence length) of above $93\%$ when trained on 771 lyric lines of Medieval manuscripts, and of $99.89\%$ when trained on around 3.5 million lines of contemporary printed fonts.
The assignment of syllables and their corresponding neumes reached $F_1$-scores of up to $99.2\%$.
A direct comparison to previously published performances is difficult due to different materials and metrics.
However, estimations show that the reported values of this thesis exceed the state-of-the-art in the area of square notation.
A further goal of this thesis is to enable musicologists without technical background to apply the developed algorithms in a complete workflow by providing a user-friendly and comfortable Graphical User Interface (GUI) encapsulating the technical details.
For this purpose, this thesis presents the web-application OMMR4all.
Its fully-functional workflow includes the proposed state-of-the-art machine-learning algorithms and optionally allows for a manual intervention at any stage to correct the output preventing error propagation.
To simplify the manual (post-) correction, OMMR4all provides an overlay-editor that superimposes the annotations with a scan of the original manuscripts so that errors can easily be spotted.
The workflow is designed to be iteratively improvable by training better models as soon as new Ground Truth (GT) is available.
An Intelligent Semi-Automatic Workflow for Optical Character Recognition of Historical Printings
(2020)
Optical Character Recognition (OCR) on historical printings is a challenging task, mainly due to the complexity of the layouts and the highly variant typography. Nevertheless, great progress has been made in the area of historical OCR in the last few years, resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, Automatic Text Recognition (ATR), and postcorrection. Their major drawback is that they are of limited use to non-technical users like humanist scholars, in particular when it comes to the combined use of several tools in a workflow. Furthermore, depending on the material, these tools are usually not able to fully automatically achieve sufficiently low error rates, let alone perfect results, creating a demand for an interactive postcorrection functionality which, however, is generally not incorporated.
This thesis addresses these issues by presenting an open-source OCR software called OCR4all, which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users. This is mostly because the Ground Truth (GT) required for training stronger mixed models (for segmentation as well as text recognition) is not yet available in the desired quantity or quality.
To deal with this issue in the short run, OCR4all offers better recognition capabilities in combination with a very comfortable Graphical User Interface (GUI) that allows error corrections not only in the final output, but already in early stages to minimize error propagation. In the long run this constant manual correction produces large quantities of valuable, high quality training material which can be used to improve fully automatic approaches. Further on, extensive configuration capabilities are provided to set the degree of automation of the workflow and to make adaptations to the carefully selected default parameters for specific printings, if necessary. The architecture of OCR4all allows for an easy integration (or substitution) of newly developed tools for its main components by supporting standardized interfaces like PageXML, thus aiming at continual higher automation for historical printings.
In addition to OCR4all, several methodical extensions in the form of accuracy improving techniques for training and recognition are presented. Most notably an effective, sophisticated, and adaptable voting methodology using a single ATR engine, a pretraining procedure, and an Active Learning (AL) component are proposed. Experiments showed that combining pretraining and voting significantly improves the effectiveness of book-specific training, reducing the obtained Character Error Rates (CERs) by more than 50%.
The proposed extensions were further evaluated during two real world case studies: First, the voting and pretraining techniques are transferred to the task of constructing so-called mixed models which are trained on a variety of different fonts. This was done by using 19th century Fraktur script as an example, resulting in a considerable improvement over a variety of existing open-source and commercial engines and models. Second, the extension from ATR on raw text to the adjacent topic of typography recognition was successfully addressed by thoroughly indexing a historical lexicon that heavily relies on different font types in order to encode its complex semantic structure.
During the main experiments on very complex early printed books even users with minimal or no experience were able to not only comfortably deal with the challenges presented by the complex layout, but also to recognize the text with manageable effort and great quality, achieving excellent CERs below 0.5%. Furthermore, the fully automated application on 19th century novels showed that OCR4all (average CER of 0.85%) can considerably outperform the commercial state-of-the-art tool ABBYY Finereader (5.3%) on moderate layouts if suitably pretrained mixed ATR models are available.
Recent advances in Natural Language Processing (NLP) allow for a fully automatic extraction of character networks for an incoming text. These networks serve as a compact and easy-to-grasp representation of literary fiction. They offer an aggregated view of the text which can be used during distant reading approaches for the analysis of literary hypotheses. At their core, the networks consist of nodes, which represent literary characters, and edges, which represent relations between characters. For an automatic extraction of such a network, the first step is the detection of the references of all fictional entities that are of importance for a text. References to fictional entities appear in the form of names, noun phrases, and pronouns, and prior to this work, no components capable of automatically detecting character references were available. Existing tools can only detect proper nouns, a subset of all character references, and when evaluated on the task of detecting proper nouns in the domain of literary fiction, they still underperform at an F1-score of just about 50%. This thesis uses techniques from the field of semi-supervised learning, such as distant supervision and generalized expectations, and improves the results of an existing tool to about 82% when evaluated on all three categories in literary fiction, without the need for annotated data in the target domain. However, since this quality is still not sufficient, the decision was made to annotate DROC, a corpus comprising 90 fragments of German novels. This resulted in a new general-purpose annotation environment, ATHEN, as well as annotated data that spans about 500,000 tokens in total. Using this data, the combination of supervised algorithms and a tailored rule-based algorithm, which together exploit both local and global consistencies, yields an algorithm with an F1-score of about 93%. This component is referred to as the Kallimachos tagger.
A character network cannot directly display references, however; instead, the references need to be clustered so that all references belonging to one real-world or fictional entity are grouped together. This process, widely known as coreference resolution, is a hard problem that has been in the focus of research for more than half a century. This work experimented with adaptations of classical feature-based machine learning, with a dedicated rule-based algorithm, and with modern techniques of deep learning, but no approach was able to surpass 55% B-Cubed F1 when evaluated on DROC. Due to this barrier, many researchers do not use fully-fledged coreference resolution when they extract character networks, but only focus on a more forgiving subset: the names. For novels such as Alice's Adventures in Wonderland by Lewis Carroll, however, this would result in a network in which many important characters are missing. In order to integrate important characters into the network that are not named by the author, this work makes use of the automatic detection of speakers and addressees of direct speech utterances (all entities involved in a dialog are considered to be of importance). This problem is by itself not an easy task, but the most successful system analyzed in this thesis is able to correctly determine the speaker for about 85% of the utterances and the addressees for about 65%. This speaker information can not only help to identify the most dominant characters, but also serves as a way to model the relations between entities.
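For reference, the B-Cubed score mentioned above can be computed from mention-to-cluster assignments as in the following minimal sketch, which assumes gold and system annotations cover the same mentions:

```python
def b_cubed_f1(gold, system):
    """B-Cubed F1 for coreference chains.
    gold, system: dicts mapping each mention id to its cluster id."""
    mentions = gold.keys() & system.keys()

    def clusters(assign):
        inv = {}
        for m, c in assign.items():
            inv.setdefault(c, set()).add(m)
        return inv

    g, s = clusters(gold), clusters(system)
    precision = recall = 0.0
    for m in mentions:
        overlap = len(g[gold[m]] & s[system[m]])
        precision += overlap / len(s[system[m]])  # per-mention precision
        recall += overlap / len(g[gold[m]])       # per-mention recall
    precision /= len(mentions)
    recall /= len(mentions)
    return 2 * precision * recall / (precision + recall)

# Hypothetical toy example: mentions 1-4 of two entities.
print(b_cubed_f1({1: "A", 2: "A", 3: "B", 4: "B"},
                 {1: "x", 2: "x", 3: "x", 4: "y"}))
```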
During the span of this work, components have been developed to model relations between characters using speaker attribution, using co-occurrences, as well as using true interactions, for which yet another dataset was annotated using ATHEN. Furthermore, since relations between characters are usually typed, a component for the extraction of typed relations was developed. Similar to the experiments for character reference detection, a combination of a rule-based and a Maximum Entropy classifier yielded the best overall results, with the extraction of family relations reaching a score of about 80% and love relations a score of about 50%. For family relations, a kernel for a Support Vector Machine was developed that even exceeded the scores of the combined approach but falls behind on the other labels.
In addition, this work presents new ways to evaluate automatically extracted networks without the need for domain experts, relying instead on expert summaries. It also refrains from using social network analysis for the evaluation and instead presents ranked evaluations using Precision@k and the Spearman rank correlation coefficient for the nodes and edges of the network. An analysis using these metrics showed that the central characters of a novel are contained in the network with high probability, but that the quality drops rather fast if more than five entities are analyzed. The quality of the edges is mainly dominated by the quality of the coreference resolution, and the correlation coefficient between gold edges and system edges therefore varies between 30% and 60%.
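Both ranked measures are straightforward to compute; the following sketch uses hypothetical character rankings and edge weights (scipy provides the Spearman coefficient):

```python
from scipy.stats import spearmanr

def precision_at_k(gold_ranking, system_ranking, k):
    """Fraction of the top-k system characters that also appear
    among the top-k characters of the expert summary."""
    return len(set(gold_ranking[:k]) & set(system_ranking[:k])) / k

gold = ["Alice", "Queen", "Rabbit", "Hatter", "Duchess"]
system = ["Alice", "Rabbit", "Queen", "Cat", "Hatter"]
print(precision_at_k(gold, system, 5))   # 0.8

# Spearman rank correlation over edge weights of gold vs. system network
gold_edges = [5, 3, 2, 1]     # e.g., interaction counts per character pair
system_edges = [4, 3, 1, 2]
rho, _ = spearmanr(gold_edges, system_edges)
print(rho)
```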
All developed components are aggregated alongside a large set of other preprocessing modules in the Kallimachos pipeline and can be reused without any restrictions.
The DFG project “SDN-enabled Application-aware Network Control Architectures and their Performance Assessment” (DFG SDN-App) focused in phase 1 (Jan 2017 – Dec 2019) on Software Defined Networking (SDN). Being a fundamental paradigm shift, SDN enables remote control of networking devices made by different vendors from a logically centralized controller. In principle, this enables a more dynamic and flexible management of network resources compared to traditional legacy networks. Phase 1 focused on multimedia applications and their users’ Quality of Experience (QoE).
This document reports the achievements of the first phase (Jan 2017 – Dec 2019), which was jointly carried out by the Technical University of Munich, Technical University of Berlin, and University of Würzburg. The project started at the institutions in Munich and Würzburg in January 2017 and lasted until December 2019.
In phase 1, the project targeted the development of fundamental control mechanisms for network-aware application control and application-aware network control in Software Defined Networks (SDN) so as to enhance the user-perceived quality (QoE). The idea is to leverage the QoE of multiple applications as a control input parameter for application- and network control mechanisms. These mechanisms are implemented by an Application Control Plane (ACP) and a Network Control Plane (NCP). In order to obtain a global view of the current system state, application and network parameters are monitored and communicated to the respective control plane interface. Network and application information and their demands are exchanged between the control planes so as to derive appropriate control actions. To this end, a methodology is developed to assess the application performance and in particular the QoE. This requires an appropriate QoE modeling of the applications considered in the project as well as metrics like QoE fairness to be utilized within QoE management.
In summary, the application-network interaction can improve the QoE in multi-application scenarios. This is ensured by utilizing information from the application layer, which is mapped to QoE by appropriate QoS-QoE models within the network control plane. Conversely, network information is monitored and communicated to the application control plane, and the control planes exchange their information and demands to derive appropriate control actions.
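As an illustration of what such QoS-QoE mapping functions can look like, the QoE literature often uses an exponential relationship between a QoS degradation and QoE (the IQX hypothesis). The parameters below are purely illustrative and not the project's actual models:

```python
import math

def qoe_from_loss(loss_pct, alpha=3.0, beta=0.85, gamma=1.5):
    """IQX-style exponential QoS-to-QoE mapping (illustrative parameters):
    MOS = alpha * exp(-beta * loss) + gamma, clipped to the 1..5 MOS scale."""
    mos = alpha * math.exp(-beta * loss_pct) + gamma
    return max(1.0, min(5.0, mos))

for loss in (0.0, 0.5, 1.0, 2.0):   # packet loss in percent
    print(loss, round(qoe_from_loss(loss), 2))
```

Such a mapping lets a network control plane reason directly in terms of the QoE impact of a QoS parameter instead of the raw parameter itself.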
Nowadays, technical systems are expected to function flawlessly at all times in order to guarantee the smooth running of everyday life. However, technical systems can exhibit defects that restrict their functionality or lead to their total failure. Fundamentally, defects manifest themselves as a change in the behavior of individual components, and these deviations from nominal behavior increase in intensity the closer the respective component is to a total failure. For this reason, faulty component behavior should be detected in time to prevent permanent damage. This is of particular importance for aerospace applications: once a satellite is in orbit, its components can no longer be serviced, and the defect of a single component, such as the battery of the power supply, can mean the loss of the entire mission. In principle, fault detection can be performed manually, as is common in satellite operations. For this, a human expert, a so-called operator, has to monitor the system. This form of monitoring, however, strongly depends on the time, availability, and expertise of the operator performing it. Another approach is the use of a dedicated diagnosis system, which can monitor the technical system permanently and compute diagnoses autonomously. The diagnoses can then be inspected by an expert, who can take actions based on them. The model-based diagnosis system presented in this work uses a quantitative model of a technical system that describes its nominal behavior. The observed behavior of the technical system, given by measured values, is compared with its expected behavior, given by simulated values of the model, and discrepancies are determined; each discrepancy constitutes a symptom. Diagnoses are computed by first determining a so-called conflict set for each symptom, i.e., a set of components such that the defect of one of these components could explain the corresponding symptom. Based on these conflict sets, so-called hitting sets are computed. A hitting set is a set of components such that the simultaneous defect of all components in this set could explain all observed symptoms; each minimal hitting set corresponds to a diagnosis. To compute these sets, the diagnosis system uses a procedure that first determines dependent components, then incriminates those affected by symptoms and exonerates those functioning correctly. Scores are computed for the individual components based on these incriminations and exonerations, and diagnoses are derived from them. Since the diagnosis system depends on sufficiently accurate models, and the manual calibration of these models involves considerable effort, a method for automatic calibration was developed. It uses a cyclic genetic algorithm to determine model parameters from recorded values of the real components, so that the models reproduce the recorded data as closely as possible.
To evaluate the automatic calibration, a test setup as well as various dynamic and manually hard-to-calibrate components of the qualification model of a real nanosatellite, the SONATE nanosatellite, were modeled and calibrated. The test setup consisted of a battery pack, a charge regulator, a deep-discharge protection circuit, a discharge regulator, a stepper motor HAT, and a motor; in addition to the automatic calibration, it was also calibrated manually and independently. The automatically calibrated satellite components were a reaction wheel, a discharge regulator, magnetic coils (consisting of one ferrite-core coil and two air coils), a termination circuit board, and a battery. To evaluate the diagnosis system, the power supply of the qualification model of the SONATE nanosatellite was modeled, using the previously automatically calibrated models for the batteries, the discharge regulators, the magnetic coils, and the reaction wheels. Faults were simulated for the power supply model and subsequently diagnosed. The evaluation of the automatic calibration showed that it achieved an accuracy comparable to, and even slightly exceeding, the manual calibration of the test setup, and that the automatically calibrated satellite components exhibited consistently high accuracy, making them suitable for use in the diagnosis system. The evaluation of the diagnosis system showed that the simulated faults were found reliably and that the diagnosis system was able to diagnose their plausible causes.
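The conflict-set/hitting-set computation described above can be illustrated with a brute-force sketch (the diagnosis system itself uses a more refined incrimination/exoneration scoring procedure); the component names are hypothetical:

```python
from itertools import combinations

def minimal_hitting_sets(conflicts):
    """Enumerate minimal hitting sets of a family of conflict sets.
    Each minimal hitting set corresponds to one diagnosis."""
    components = set().union(*conflicts)
    found = []
    for size in range(1, len(components) + 1):
        for candidate in combinations(sorted(components), size):
            cand = set(candidate)
            if all(cand & c for c in conflicts):       # hits every conflict
                if not any(f <= cand for f in found):  # keep only minimal sets
                    found.append(cand)
    return found

# Hypothetical conflicts from two symptoms in a satellite power subsystem:
conflicts = [{"battery", "discharge_regulator"}, {"battery", "coil"}]
print(minimal_hitting_sets(conflicts))
# -> [{'battery'}, {'coil', 'discharge_regulator'}]
```

Here, a single faulty battery explains both symptoms, while the alternative diagnosis requires two simultaneous defects; the enumeration by increasing size guarantees minimality.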
Asynchronous Traffic Shaping (ATS) enables bounded latency with low complexity for time-sensitive networking without the need for time synchronization. However, its main focus is the guaranteed maximum delay; jitter-sensitive applications may still be forced towards synchronization. This work proposes traffic damping to reduce end-to-end delay jitter. It discusses its application and shows that both the prerequisites and the guaranteed delay of traffic damping and ATS are very similar. Finally, it presents a brief evaluation of delay jitter in an example topology by means of simulation and worst-case estimation.
Insect microbiota plays an essential role in the hosts’ health and fitness, regulating their development, nutrition, and immunity. The natural microbiota of bees, in particular, has received much attention, largely because of the globally reported bee population declines. However, although the worker honey bee has been associated with a distinctive and specialized microbiota, the microbiota of solitary bees has not been examined in detail, despite their enormous ecological importance. The main objectives of the present thesis were a) the description of the bacterial communities of various solitary bee species, b) the association of the solitary bee microbiota with ecological factors such as landscape type, c) the relation of the bees’ foraging preferences to their nest bacterial microbiota, d) the examination of the contribution of the nest building material to the nest microbiota, e) the isolation of bacterial strains with beneficial or harmful properties for solitary bee larvae, and f) the pathological investigation of bacteria found in deceased solitary bee larvae.
The findings of the present study revealed a high bacterial biodiversity in solitary bee nests. At the same time, the bacterial communities differed for each bee host species. Furthermore, it was shown that the pollen bacterial communities underwent compositional shifts reflecting a reduction in floral bacteria with progressing larval development, while a clear landscape effect was absent. The examination of the nest pollen provisions showed different foraging preferences for each included bee species. Both the pollen composition and the host species identity had a strong effect on the pollen bacteria, indicating that the pollen bacterial communities are the result of a combinatory process. The introduced environmental material also contributed to the nest’s natural microbiome. However, although the larval microbiota was significantly influenced by the pollen microbiota, it showed little association with that of the nest material.
Two Paenibacillus strains isolated from O. bicornis nests showed strong antifungal activities, while several isolated strains were able to metabolize various oligosaccharides that are common in pollen and nectar. Screening for potentially pathogenic bacteria in the nests of O. bicornis unveiled bacterial taxa that dominated the bacterial community in deceased larvae while being undetectable in healthy individuals.
Finally, larvae raised in vitro developed distinct bacterial microbiomes according to their diet, and their life span was affected as well.
The present thesis described aspects of the microbiota dynamics in the nests of seven megachilid solitary bee species, suggesting which transmission pathways shape the established bacterial communities and how these are altered during larval development. Furthermore, specific bacterial taxa were associated with possible services they might provide to the larvae, while others were related to possible harmful effects. Future studies should integrate microbiota examinations of different bee generations and a parallel investigation of the microbiota of the nests and their surrounding environment (plant community, soil) to elucidate the bacterial transmission paths that establish the nest microbiota of solitary bees. Functional assays will also allow future studies to characterize specific nest bacteria as beneficial or harmful and to describe how they assist the development of healthy bees and the fitness of bee populations.
Are there emotional reactions towards social robots? Could you love a robot? Or, put the other way round: could you mistreat a robot, tear it apart, and sell it? Media reports describe people honoring military robots with funerals, mourning the “death” of a robotic dog, and granting the humanoid robot Sophia citizenship. But how profound are these reactions? Three experiments take a closer look at emotional reactions towards social robots by investigating the subjective experience of people as well as the motor-expressive level. Contexts with varying degrees of Human-Robot Interaction (HRI) sketch a nuanced picture of emotions towards social robots that encompasses conscious as well as unconscious reactions. The findings advance the understanding of affective experiences in HRI and turn the initial question into: can emotional reactions towards social robots even be avoided?
White Paper on Crowdsourced Network and QoE Measurements – Definitions, Use Cases and Challenges
(2020)
The goal of the white paper at hand is as follows: the definitions of the terms provide a framework for discussions around the hype topic ‘crowdsourcing’. This serves as a basis for differentiation and a consistent view from different perspectives on crowdsourced network measurements, with the goal of providing a commonly accepted definition in the community. The focus is on the context of mobile and fixed network operators, but also on measurements at different layers (network, application, user layer). In addition, the white paper shows the value of crowdsourcing for selected use cases, e.g., to improve QoE or to address regulatory issues. Finally, the major challenges and issues for researchers and practitioners are highlighted.
This white paper is the outcome of the Würzburg seminar on “Crowdsourced Network and QoE Measurements”, which took place on 25-26 September 2019 in Würzburg, Germany. International experts were invited from industry and academia. They are well known in their communities, having different backgrounds in crowdsourcing, mobile networks, network measurements, network performance, Quality of Service (QoS), and Quality of Experience (QoE). The discussions in the seminar focused on how crowdsourcing will support vendors, operators, and regulators in determining the Quality of Experience in new 5G networks that enable various new applications and network architectures. As a result of the discussions, the need for a white paper manifested, with the goal of providing a scientific discussion of the terms “crowdsourced network measurements” and “crowdsourced QoE measurements”, describing relevant use cases for such crowdsourced data, and its underlying challenges. During the seminar, those main topics were identified, intensively discussed in break-out groups, and brought back into the plenum several times. The outcome of the seminar is the white paper at hand, which is, to our knowledge, the first one covering the topic of crowdsourced network and QoE measurements.
Virtual reality and related media and communication technologies have a growing impact on professional application fields and our daily life. Virtual environments have the potential to change the way we perceive ourselves and how we interact with others. In comparison to other technologies, virtual reality allows for the convincing display of a virtual self-representation, an avatar, to oneself and also to others. This is referred to as user embodiment. Avatars can be of varying realism and abstraction in their appearance and in the behaviors they convey. Such user-embodying interfaces, in turn, can impact the perception of the self as well as the perception of interactions. For researchers, designers, and developers it is of particular interest to understand these perceptual impacts, to apply them to therapy, assistive applications, social platforms, or games, for example. The present thesis investigates and relates these impacts with regard to three areas: intrapersonal effects, interpersonal effects, and effects of social augmentations provided by the simulation.
With regard to intrapersonal effects, we specifically explore which simulation properties impact the illusion of owning and controlling a virtual body, as well as a perceived change in body schema. Our studies lead to the construction of an instrument to measure these dimensions, and our results indicate that these dimensions are especially affected by the level of immersion, the simulation latency, as well as the level of personalization of the avatar.
With regard to interpersonal effects, we compare physical and user-embodied social interactions, as well as different degrees of freedom in the replication of nonverbal behavior. Our results suggest that functional levels of interaction are maintained, whereas aspects of presence can be affected by avatar-mediated interactions, and collaborative motor coordination can be disturbed by immersive simulations.
Social interaction is composed of many unknown symbols and harmonic patterns that define our understanding and interpersonal rapport. For successful virtual social interactions, a mere replication of physical world behaviors to virtual environments may seem feasible. However, the potential of mediated social interactions goes beyond this mere replication. In a third vein of research, we propose and evaluate alternative concepts on how computers can be used to actively engage in mediating social interactions, namely hybrid avatar-agent technologies. Specifically, we investigated the possibilities of augmenting social behaviors by modifying and transforming user input according to social phenomena and behavior, such as nonverbal mimicry, directed gaze, joint attention, and grouping. Based on our results, we argue that such technologies could be beneficial for computer-mediated social interactions, for example to compensate for lacking sensory input and disturbances in data transmission, or to increase aspects of social presence by visual substitution or amplification of social behaviors.
Based on related work and the presented findings, the present thesis proposes the perspective of considering computers as social mediators. Concluding from prototypes and empirical studies, the potential of technology to be an active mediator of social perception, with regard to the perception of the self as well as the perception of social interactions, may benefit our society by enabling further methods for diagnosis, treatment, and training, as well as the inclusion of individuals with social disorders. In this regard, we discuss implications for our society and ethical aspects. This thesis extends previous empirical work and further presents novel instruments, concepts, and implications to open up new perspectives for the development of virtual reality, mixed reality, and augmented reality applications.
Bridge-local latency computation is often regarded with caution, as historic efforts with the Credit-Based Shaper (CBS) showed that CBS requires network-wide information for tight bounds. Recently, new shaping mechanisms and timed gates were applied to achieve such guarantees nonetheless, but they require support for these new mechanisms in the forwarding devices.
This document presents a per-hop latency bound for individual streams in a class-based network that applies the IEEE 802.1Q strict priority transmission selection algorithm. It is based on self-pacing talkers and uses the accumulated latency fields during the reservation process to provide upper bounds with bridge-local information. The presented delay bound is proven mathematically and then evaluated with respect to its accuracy. It indicates the required information that must be provided for admission control, e.g., implemented by a resource reservation protocol such as IEEE 802.1Qdd. Further, it hints at potential improvements regarding new mechanisms and higher accuracy given more information.
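For illustration only, and not the bound derived in this document: the general flavor of such per-hop bounds can be seen in the classical network-calculus result for strict priority. A stream constrained by a token-bucket arrival curve $\alpha(t) = b + rt$ that is served according to a rate-latency service curve $\beta_{R,T}(t) = R\,[t-T]^+$ experiences a per-hop delay of at most
\[
d \;\le\; T + \frac{b}{R}.
\]
Under non-preemptive strict priority on a link of rate $C$, a class whose higher-priority cross-traffic is bounded by burst $b_H$ and rate $r_H$ receives such a service curve with
\[
R = C - r_H, \qquad T = \frac{b_H + \ell_{\max}}{C - r_H},
\]
where $\ell_{\max}$ is the maximum lower-priority frame size that can block the link (one frame already in transmission). The quantities $b_H$ and $r_H$ are exactly the kind of information that an admission control or reservation protocol must provide.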
The importance of Clinical Data Warehouses (CDW) has increased significantly in recent years, as they support or enable many applications such as clinical trials, data mining, and decision making. CDWs integrate Electronic Health Records, which, in addition to structured and coded data like ICD codes of diagnoses, still contain a large amount of text data such as discharge letters or reports on diagnostic findings. Existing CDWs hardly support features to gain information covered in texts. Information extraction methods offer a solution to this problem, but they entail a high and long development effort that can only be carried out by computer scientists, and such systems only exist for a few medical domains. This paper presents a method empowering clinicians to extract information from texts on their own. Medical concepts can be extracted ad hoc from, e.g., discharge letters, so physicians can work promptly and autonomously. The proposed system achieves these improvements through efficient data storage, preprocessing, and powerful query features. Negations in texts are recognized and automatically excluded, the context of information is determined, and undesired facts such as historical events or references to other persons (family history) are filtered out. Context-sensitive queries ensure the semantic integrity of the concepts to be extracted. A new feature not available in other CDWs is the ability to query numerical concepts in texts and even filter them (e.g., BMI > 25). The retrieved values can be extracted and exported for further analysis.
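A sketch of how such a numerical concept query might work; the pattern, concept names, and units below are illustrative assumptions and not PaDaWaN's actual implementation:

```python
import re

# Hypothetical pattern: concept name, optional comparator, value, optional unit.
NUMERIC = re.compile(
    r"(?P<concept>BMI|LVEF)\s*(?P<op>[<>=]=?)?\s*"
    r"(?P<value>\d+(?:[.,]\d+)?)\s*(?P<unit>%|kg/m2)?",
    re.IGNORECASE,
)

def query_numeric(text, concept, predicate):
    """Return all values of a numeric concept in a text that satisfy
    a filter predicate, e.g. lambda v: v > 25 for 'BMI > 25'."""
    hits = []
    for m in NUMERIC.finditer(text):
        if m.group("concept").upper() == concept:
            value = float(m.group("value").replace(",", "."))
            if predicate(value):
                hits.append(value)
    return hits

letter = "Echocardiography: LVEF 42 %. Nutritional status: BMI 27,5 kg/m2."
print(query_numeric(letter, "LVEF", lambda v: v < 45))  # [42.0]
print(query_numeric(letter, "BMI", lambda v: v > 25))   # [27.5]
```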
This technique is implemented within the efficient architecture of the PaDaWaN CDW and evaluated with comprehensive and complex tests; the results outperform similar approaches reported in the literature. Ad hoc IE determines the results in a few (milli)seconds, and a user-friendly GUI enables interactive working, allowing flexible adaptation of the extraction. In addition, the applicability of this system is demonstrated in three real-world applications at the Würzburg University Hospital (UKW). Several drug trend studies are replicated: findings of five studies on high blood pressure, atrial fibrillation, and chronic renal failure can be partially or completely confirmed at the UKW. Another case study evaluates the prevalence of heart failure among hospital inpatients using an algorithm that extracts information with ad hoc IE from discharge letters and echocardiogram reports (e.g., LVEF < 45) and other sources of the hospital information system. This study reveals that the use of ICD codes leads to a significant underestimation (31%) of the true prevalence of heart failure. The third case study evaluates the consistency of diagnoses by comparing structured ICD-10-coded diagnoses with the diagnoses described in the diagnostic section of the discharge letter. These diagnoses are extracted from the texts with ad hoc IE, using synonyms generated with a novel method. The developed approach can extract diagnoses from the discharge letter with high accuracy, and furthermore it can determine the degree of consistency between the coded and reported diagnoses.
Automation in Software Performance Engineering Based on a Declarative Specification of Concerns
(2019)
Software performance is of particular relevance to software system design, operation, and evolution because it has a significant impact on key business indicators. During the life-cycle of a software system, its implementation, configuration, and deployment are subject to multiple changes that may affect the end-to-end performance characteristics. Consequently, performance analysts continually need to provide answers to, and act based on, performance-relevant concerns. To ensure a desired level of performance, software performance engineering provides a plethora of methods, techniques, and tools for measuring, modeling, and evaluating the performance properties of software systems. However, answering performance concerns is subject to a significant semantic gap between the level on which performance concerns are formulated and the technical level on which performance evaluations are actually conducted. Performance evaluation approaches come with different strengths and limitations concerning, for example, accuracy, time-to-result, or system overhead. For the involved stakeholders, it can be an elaborate process to reasonably select, parameterize, and correctly apply performance evaluation approaches, and to filter and interpret the obtained results. An additional challenge is that available performance evaluation artifacts may change over time, which requires switching between different measurement-based and model-based performance evaluation approaches during the system evolution. For model-based analysis, the effort involved in creating performance models can also outweigh their benefits.
Overcoming these deficiencies and enabling an automatic and holistic evaluation of performance throughout the software engineering life-cycle requires an approach that (i) integrates multiple types of performance concerns and evaluation approaches, (ii) automates performance model creation, and (iii) automatically selects an evaluation methodology tailored to a specific scenario. This thesis presents a declarative approach, called Declarative Performance Engineering (DPE), to automate performance evaluation based on a human-readable specification of performance-related concerns. To this end, we separate the definition of performance concerns from their solution. The primary scientific contributions presented in this thesis are:
A declarative language to express performance-related concerns and a corresponding processing framework:
We provide a language to specify performance concerns independently of a concrete performance evaluation approach. Besides the specification of functional aspects, the language optionally allows non-functional tradeoffs to be included. To answer these concerns, we provide a framework architecture and a corresponding reference implementation to process performance concerns automatically. It allows arbitrary performance evaluation approaches to be integrated and is accompanied by reference implementations for model-based and measurement-based performance evaluation.
Automated creation of architectural performance models from execution traces:
The creation of performance models can require significant effort, outweighing the benefits of model-based performance evaluation. We provide a model extraction framework that creates architectural performance models based on execution traces provided by monitoring tools. The framework separates the derivation of generic information from the model creation routines. To derive generic information, the framework combines state-of-the-art extraction and estimation techniques. We isolate object creation routines in a generic model builder interface based on concepts present in multiple performance-annotated architectural modeling formalisms. To support model extraction for a novel performance modeling formalism, developers only need to write object creation routines instead of creating model extraction software from scratch.
Automated and extensible decision support for performance evaluation approaches:
We present a methodology and tooling for the automated selection of a performance evaluation approach tailored to the user concerns and the application scenario. To this end, we propose to reduce the complexity of selecting a performance evaluation approach for a given scenario by providing solution approach capability models and a generic decision engine. The proposed capability meta-model makes it possible to describe the functional and non-functional capabilities of performance evaluation approaches and tools at different granularities. In contrast to existing tree-based decision support mechanisms, this decoupling allows characteristics of solution approaches to be updated easily and new rating criteria to be appended, thereby staying abreast of the evolution of performance evaluation tooling and system technologies.
Time-to-result estimation for model-based performance prediction:
The time required to execute a model-based analysis plays an important role in different decision processes. For example, evaluation scenarios might require the prediction results to be available in a limited period of time so that the system can be adapted in time to ensure the desired quality of service. We propose a method to estimate the time-to-result for model-based performance prediction based on model characteristics and analysis parametrization. We learn a prediction model using performance-relevant features that we determined using statistical tests. We implement the approach and demonstrate its practicability by applying it to analyze a simulation-based multi-step performance evaluation approach for a representative architectural performance modeling formalism.
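A minimal sketch of such a time-to-result estimator; the feature choices and training values below are hypothetical stand-ins for the model characteristics and analysis parametrizations mentioned above:

```python
from sklearn.ensemble import RandomForestRegressor

# Hypothetical feature vectors describing a performance model and its
# analysis parametrization: (number of components, number of users,
# simulation end time); target: measured analysis duration in seconds.
X = [[10, 50, 100], [10, 200, 100], [40, 50, 500], [40, 200, 500],
     [80, 400, 500], [80, 400, 1000], [20, 100, 200], [60, 300, 800]]
y = [12, 30, 95, 160, 420, 800, 40, 550]

estimator = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Estimate the time-to-result for an unseen model/parametrization:
print(estimator.predict([[50, 250, 600]]))
```

An estimate like this can then feed the decision engine, e.g., to rule out analyses that cannot deliver results within the available time budget.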
We validate each of the contributions based on representative case studies. The evaluation of automatic performance model extraction for two case study systems shows that the resulting models can accurately predict the performance behavior: prediction errors are below 3% for resource utilization and mostly less than 20% for service response time. The separate evaluation of the reusability shows that the presented approach lowers the implementation effort for automated model extraction tools by up to 91%. Based on two case studies applying measurement-based and model-based performance evaluation techniques, we demonstrate the suitability of the declarative performance engineering framework to answer multiple kinds of performance concerns customized to non-functional goals, and we discuss the reduced effort in applying performance analyses using the integrated and automated declarative approach. The evaluation of the declarative framework also reviews the benefits of and savings from integrating performance evaluation approaches into it. We demonstrate the applicability of the decision framework for performance evaluation approaches by using it to reproduce existing decision trees. We then show how it can quickly adapt to the evolution of performance evaluation methods, which is challenging for static tree-based decision support systems; in particular, we show how to cope with the evolution of functional and non-functional capabilities of performance evaluation software and explain how to integrate new approaches. Finally, we evaluate the accuracy of the time-to-result estimation for a set of machine learning algorithms and different training datasets. The predictions exhibit a mean percentage error below 20%, which can be further improved by including performance evaluations of the considered model in the training data. The presented contributions represent a significant step towards an integrated performance engineering process that combines the strengths of model-based and measurement-based performance evaluation. The proposed performance concern language, in conjunction with the processing framework, significantly reduces the complexity of applying performance evaluations for all stakeholders and thereby enables performance awareness throughout the software engineering life-cycle. It also removes the semantic gap between the level on which performance concerns are formulated and the technical level on which performance evaluations are actually conducted.
Energy efficiency of computing systems has become an increasingly important issue over the last decades. In 2015, data centers were responsible for 2% of the world's greenhouse gas emissions, which is roughly the same as the amount produced by air travel.
In addition to these environmental concerns, power consumption of servers in data centers results in significant operating costs, which increase by at least 10% each year.
To address this challenge, the U.S. EPA and other government agencies are considering the use of novel measurement methods in order to label the energy efficiency of servers.
The energy efficiency and power consumption of a server is subject to a great number of factors, including, but not limited to, hardware, software stack, workload, and load level.
This huge number of influencing factors makes measuring and rating of energy efficiency challenging. It also makes it difficult to find an energy-efficient server for a specific use-case. Among others, server provisioners, operators, and regulators would profit from information on the servers in question and on the factors that affect those servers' power consumption and efficiency. However, we see a lack of measurement methods and metrics for energy efficiency of the systems under consideration.
Even assuming that a measurement methodology existed, making decisions based on its results would be challenging. Power prediction methods that make use of these results would aid in decision making. They would enable potential server customers to make better purchasing decisions and help operators predict the effects of potential reconfigurations.
Existing energy efficiency benchmarks cannot fully address these challenges, as they only measure single applications at limited sets of load levels. In addition, existing efficiency metrics are not helpful in this context, as they are usually a variation of the simple performance per power ratio, which is only applicable to single workloads at a single load level. Existing data center efficiency metrics, on the other hand, express the efficiency of the data center space and power infrastructure, not focusing on the efficiency of the servers themselves. Power prediction methods for not-yet-available systems that could make use of the results provided by a comprehensive power rating methodology are also lacking. Existing power prediction models for hardware designers have a very fine level of granularity and detail that would not be useful for data center operators.
This thesis presents a measurement and rating methodology for energy efficiency of servers and an energy efficiency metric to be applied to the results of this methodology. We also design workloads, load intensity and distribution models, and mechanisms that can be used for energy efficiency testing. Based on this, we present power prediction mechanisms and models that utilize our measurement methodology and its results for power prediction.
Specifically, the six major contributions of this thesis are:
We present a measurement methodology and metrics for energy efficiency rating of servers that use multiple, specifically chosen workloads at different load levels for a full system characterization.
We evaluate the methodology and metric with regard to their reproducibility, fairness, and relevance. We investigate the power and performance variations of test results and show fairness of the metric through a mathematical proof and a correlation analysis on a set of 385 servers. We evaluate the metric's relevance by showing the relationships that can be established between metric results and third-party applications.
We create models and extraction mechanisms for load profiles that vary over time, as well as load distribution mechanisms and policies. The models are designed to be used to define arbitrary dynamic load intensity profiles that can be leveraged for benchmarking purposes. The load distribution mechanisms place workloads on computing resources in a hierarchical manner.
Our load intensity models can be extracted in less than 0.2 seconds and our resulting models feature a median modeling error of 12.7% on average. In addition, our new load distribution strategy can save up to 10.7% of power consumption on a single server node.
We introduce an approach to create small-scale workloads that emulate the power consumption-relevant behavior of large-scale workloads by approximating their CPU performance counter profile, and we introduce TeaStore, a distributed, micro-service-based reference application. TeaStore can be used to evaluate power and performance model accuracy, elasticity of cloud auto-scalers, and the effectiveness of power saving mechanisms for distributed systems.
We show that we are capable of emulating the power consumption behavior of realistic workloads with a mean deviation of less than 10% and down to 0.2 watts (1%). We demonstrate the use of TeaStore in the context of performance model extraction and cloud auto-scaling, also showing that it can generate workloads with different effects on the power consumption of the system under consideration.
We present a method for automated selection of interpolation strategies for performance and power characterization. We also introduce a configuration approach for polynomial interpolation functions of varying degrees that improves prediction accuracy for system power consumption for a given system utilization.
We show that, in comparison to regression, our automated interpolation method selection and configuration approach improves modeling accuracy by 43.6% if additional reference data is available and by 31.4% if it is not.
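A sketch of degree selection for polynomial interpolation of a power curve, using leave-one-out error as the selection criterion; the measurement values are made up, and the thesis's actual selection and configuration method may differ:

```python
import numpy as np

# Hypothetical calibration measurements: utilization (%) -> power (W).
util = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
power = np.array([45, 60, 72, 81, 89, 96, 102, 108, 113, 117, 120])

def loo_error(degree):
    """Leave-one-out prediction error of a polynomial of the given degree."""
    errors = []
    for i in range(len(util)):
        mask = np.arange(len(util)) != i
        coeffs = np.polyfit(util[mask], power[mask], degree)
        errors.append(abs(np.polyval(coeffs, util[i]) - power[i]))
    return np.mean(errors)

# Pick the degree with the lowest leave-one-out error, then fit on all data.
best = min(range(1, 5), key=loo_error)
model = np.poly1d(np.polyfit(util, power, best))
print(best, model(65))   # predicted power at 65% utilization
```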
We present an approach for explicit modeling of the impact a virtualized environment has on power consumption and a method to predict the power consumption of a software application. Both methods use results produced by our measurement methodology to predict the respective power consumption for servers that are otherwise not available to the person making the prediction.
Our methods are able to predict power consumption reliably for multiple hypervisor configurations and for the target application workloads. Application workload power prediction features a mean absolute percentage error of 9.5%.
Finally, we propose an end-to-end modeling approach for predicting the power consumption of component placements at run-time. The model can also be used to predict the power consumption at load levels that have not yet been observed on the running system.
We show that we can predict the power consumption of two different distributed web applications with a mean absolute percentage error of 2.2%. In addition, we can predict the power consumption of a system at a previously unobserved load level and component distribution with an error of 1.2%.
The contributions of this thesis already show a significant impact in science and industry. The presented efficiency rating methodology, including its metric, have been adopted by the U.S. EPA in the latest version of the ENERGY STAR Computer Server program. They are also being considered by additional regulatory agencies, including the EU Commission and the China National Institute of Standardization. In addition, the methodology's implementation and the underlying methodology itself have already found use in several research publications.
Regarding future work, we see a need for new workloads targeting specialized server hardware. At the moment, we are witnessing a shift in execution hardware to specialized machine learning chips, general purpose GPU computing, FPGAs being embedded into compute servers, etc. To ensure that our measurement methodology remains relevant, workloads covering these areas are required. Similarly, power prediction models must be extended to cover these new scenarios.
The present dissertation investigates the management of RFID implementations in retail trade, contributing by examining important aspects that have so far received little attention in the scientific literature. We perform three studies on three important aspects of managing RFID implementations. In our first study, we evaluate customer acceptance of pervasive retail systems using privacy calculus theory; the results reveal the most important aspects a retailer has to consider when implementing such systems. In our second study, we analyze RFID-enabled robotic inventory taking with the help of a simulation model; the results show that retailers should implement robotic inventory taking if the accuracy rates of the robots are as high as the robots’ manufacturers claim. In our third and last study, we evaluate the potential of RFID data for supporting managerial decision making, proposing three novel methods to extract useful information from RFID data as well as a generic information extraction process. Our work is geared towards practitioners who want to improve their RFID-enabled processes and towards scientists conducting RFID-based research.
It is the aim of this thesis to present a visual body weight estimation that is suitable for medical applications. A typical scenario in which the estimation of the body weight is essential is the emergency treatment of stroke patients: in case of an ischemic stroke, the patient has to receive a body-weight-adapted drug to dissolve a blood clot in a vessel. The accuracy of the estimated weight directly influences the outcome of the therapy. However, the treatment has to start as early as possible after arrival in the trauma room to be effective. Weighing a patient takes time, and the patient has to be moved. Furthermore, patients are often not able to communicate their body weight due to their stroke symptoms. Therefore, it is state of the art that physicians guess the body weight. A patient receiving too low a dose has an increased risk that the blood clot does not dissolve and brain tissue is permanently damaged; today, about one-third of patients receive an insufficient dosage. An overdose, in contrast, can cause bleeding and further complications. Physicians are aware of this issue, but a reliable alternative is missing.
The thesis presents state-of-the-art principles and devices for the measurement and estimation of body weight in the context of medical applications. While scales are common and available in hospitals, the process of weighing takes too long and can hardly be integrated into the stroke treatment workflow. Sensor systems and algorithms are presented in the related work section and provide an overview of different approaches.
The system presented here, called Libra3D, consists of a computer installed in a real trauma room as well as visual sensors integrated into the ceiling. For the estimation of the body weight, the patient lies on a stretcher placed in the field of view of the sensors. The three sensors, two RGB-D cameras and a thermal camera, are calibrated intrinsically and extrinsically. Algorithms for sensor fusion are presented to align the data from all sensors, which is the basis for a reliable segmentation of the patient.
A combination of state-of-the-art image and point cloud algorithms is used to localize the patient on the stretcher. The challenge in this scenario is the dynamic environment, which may include other people or medical devices in the field of view.
After the successful segmentation, a set of hand-crafted features is extracted from the patient's point cloud. These features rely on geometric and statistical values and provide a robust input to a subsequent machine learning approach. The final estimation is done with a previously trained artificial neural network.
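To illustrate the pipeline of hand-crafted point-cloud features feeding a neural network, here is a heavily simplified sketch with synthetic data; the actual feature set and network of Libra3D are not reproduced here, and the synthetic weight labels are arbitrary:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def features(points):
    """Simple geometric/statistical descriptors of a patient point cloud
    (n x 3 array of x, y, z coordinates in meters) - illustrative only."""
    extent = points.max(axis=0) - points.min(axis=0)   # length, width, height
    volume_proxy = np.prod(extent)                     # crude bounding volume
    return np.concatenate([extent, [volume_proxy], points.std(axis=0)])

# Synthetic training data: point clouds paired with made-up weights.
rng = np.random.default_rng(0)
clouds = [rng.normal(size=(500, 3)) * [0.9, 0.25, 0.15] for _ in range(50)]
weights = [55 + 40 * np.prod(c.std(axis=0)) / 0.03 for c in clouds]

X = np.array([features(c) for c in clouds])
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X, weights)
print(net.predict(X[:1]))   # estimated weight for the first cloud
```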
The experiments section evaluates different configurations of the extracted feature vector. Additionally, the approach presented here is compared to state-of-the-art methods: the patient's own assessment, the physician's guess, and an anthropometric estimation. Apart from the patient's own estimation, Libra3D outperforms all state-of-the-art estimation methods: 95 percent of all patients are estimated with a relative error of less than 10 percent with respect to the ground truth body weight. The measurement takes only a minimal amount of time, and the approach can easily be integrated into the treatment of stroke patients without hindering physicians.
Furthermore, the experiments section demonstrates two additional applications: the extracted features can also be used to estimate the body weight of people standing, or even walking, in front of a 3D camera. It is also possible to determine or classify the BMI of a subject on a stretcher; a potential application of this approach is the reduction of the radiation dose for patients exposed to X-rays during a CT examination.
During the course of this thesis, several data sets were recorded. These data sets contain the ground truth body weight as well as the data from the sensors, and they are available for collaboration in the field of body weight estimation for medical applications.
The field of human-computer interaction (HCI) strives for innovative user interfaces. Innovative and novel user interfaces are a challenge for a growing population of older users and put older adults at risk of being excluded from an increasingly digital world, because older adults often have lower cognitive abilities and little prior experience with technology.
This thesis aims at resolving the tension between innovation and age-inclusiveness by developing user interfaces that can be used regardless of cognitive abilities and technology-dependent prior knowledge.
The method of image-schematic metaphors holds promise for innovative and age-inclusive interaction design. Image-schematic metaphors represent a form of technology-independent prior knowledge. They reveal basic mental models and can be gathered from language (e.g., a bank account is a container, from "I put money into my bank account").
Based on a discussion of previous applications of image-schematic metaphors in HCI, the present work derives three empirical research questions regarding image-schematic metaphors for innovative and age-inclusive interaction design.
The first research question addresses the yet untested assumption that younger and older adults overlap in their technology-independent prior knowledge and, therefore, in their usage of image-schematic metaphors. In study 1, a total of 41 participants described abstract concepts from the domains of online banking and everyday life. In study 2, ten contextual interviews were conducted. In both studies, younger and older adults showed a substantial overlap of 70% to 75%, indicating that their mental models also overlap substantially.
The second research question addresses the applicability and potential of image-schematic metaphors for innovative design from the perspective of designers. In study 3, 18 student design teams completed an ideation process with either an affinity diagram (the industry standard), image-schematic metaphors, or both methods in combination, and created paper prototypes. The image-schematic metaphor method alone, but not the combination of both methods, was readily adopted and applied just as well as the more familiar standard method.
In study 4, professional interaction designers created prototypes either with or without image-schematic metaphors. In both studies, the method of image-schematic metaphors was perceived as applicable and creativity stimulating.
The third research question addresses whether designs that explicitly follow image-schematic metaphors are more innovative and age-inclusive with regard to differences in cognitive abilities and prior technological knowledge. In two experimental studies (studies 5 and 6) involving a total of 54 younger and 53 older adults, prototypes that were designed with image-schematic metaphors were perceived as more innovative than those designed without them. Moreover, the impact of prior technological knowledge on the interaction was reduced for prototypes that had been designed with image-schematic metaphors. However, participants' cognitive abilities and age still influenced the interaction significantly.
The present work provides empirical as well as methodological findings that can help to promote the method of image-schematic metaphors in interaction design. From these studies it can be concluded that image-schematic metaphors are an applicable and effective method for designing innovative user interfaces that can be used regardless of prior technological knowledge.
With the introduction of Software-defined Networking (SDN) in the late 2000s, not only was a new research field created, but a paradigm shift was initiated in the broad field of networking. The programmable network control offered by SDN is a big step, but also a stumbling block for many of the established network operators and vendors. As with any new technology, the question about its maturity and production-readiness arises. Therefore, this thesis picks specific features of SDN and analyzes their performance, reliability, and availability in scenarios that can be expected in production deployments.
The first SDN topic is the performance impact of application traffic in the data plane on the control plane. Second, reliability and availability concerns of SDN deployments are analyzed by example, evaluating the detection performance of a common SDN controller. Third, the performance of P4, a technology that enhances SDN, is evaluated, in particular the impact of certain control operations on its processing performance.
Telemedicine uses telecommunication and information technology to provide health care services over spatial distances. With the upcoming demographic change towards an older average population age, rural areas in particular suffer from a decreasing doctor-to-patient ratio as well as a limited number of medical specialists available within acceptable distance. These areas could benefit the most from telemedicine applications, as they are known to improve access to medical services and medical expertise and can also help to mitigate critical or emergency situations. Although telemedicine applications are possible across the entire range of healthcare, current systems focus on one specific disease and use dedicated hardware to connect the patient with the supervising telemedicine center.
This thesis describes the development of a telemedical system that follows a new generic design approach. This bridges the gap left by existing approaches that only tackle one specific application. The proposed system, on the contrary, aims at supporting as many diseases and use cases as possible by taking all stakeholders into account at the same time. To address usability and acceptance, the system is designed to use standardized hardware, such as commercial medical sensors and smartphones, for collecting medical data from the patients and transmitting it to the telemedical center. The smartphone can also act as an interface to the patient for health questionnaires or feedback.
The system handles the collection and transport of medical data, the analysis and visualization of the data, as well as real-time communication with video and audio between the users.
On top of the generic telemedical framework, the issue of scalability is addressed by integrating a rule-based analysis tool for the medical data. Rules can easily be created by medical personnel via a visual editor and can be personalized for each patient. The rule-based analysis tool is extended by multiple options for visualizing the data, mechanisms for handling complex rules, and options for performing actions such as raising alarms or sending automated messages.
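A minimal sketch of such rule-based analysis: rules expressed as simple threshold tuples over incoming vital signs. The signal names, thresholds, and actions are hypothetical, and the system's visual editor produces richer rules than this:

```python
import operator

OPS = {"<": operator.lt, ">": operator.gt, "<=": operator.le, ">=": operator.ge}

# Hypothetical per-patient rules as they might come out of the visual editor:
# (vital sign, comparison, threshold, action to trigger)
rules = [
    ("spo2", "<", 90, "raise_alarm"),
    ("weight_gain_24h", ">", 2.0, "notify_physician"),
    ("systolic_bp", ">=", 180, "raise_alarm"),
]

def evaluate(measurement):
    """Check one incoming measurement dict against all active rules."""
    triggered = []
    for sign, op, threshold, action in rules:
        if sign in measurement and OPS[op](measurement[sign], threshold):
            triggered.append(action)
    return triggered

print(evaluate({"spo2": 88, "systolic_bp": 120}))  # ['raise_alarm']
```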
It is sometimes hard for medical experts to formulate their knowledge as rules, and there may be information in the medical data that is not yet known. For this reason, a machine learning module was integrated into the system. It uses the incoming medical data of the patients to learn new rules, which are then presented to the medical personnel for inspection. This is in line with European legislation, which requires that a human remain in charge of such decisions.
Overall, we were able to show the benefit of the generic approach by evaluating it in three completely different medical use cases derived from specific application needs: monitoring of COPD (chronic obstructive pulmonary disease) patients, support of patients performing dialysis at home, and councils of intensive-care experts. In addition, the system was used for a non-medical use case: the monitoring and optimization of industrial machines and robots. In all of the mentioned cases, we were able to prove the robustness of the generic approach with real users from the corresponding domain. We can therefore propose this approach for the future development of telemedical systems.
The Software Defined Networking (SDN) paradigm offers network operators numerous improvements in terms of flexibility, scalability, as well as cost efficiency and vendor independence. However, in order to maximize the benefit from these features, several new challenges in areas such as management and orchestration need to be addressed. This dissertation makes contributions towards three key topics from these areas.
Firstly, we design, implement, and evaluate two multi-objective heuristics for the SDN controller placement problem. Secondly, we develop and apply mechanisms for automated decision making based on the Pareto frontiers returned by the multi-objective optimizers. Finally, we investigate and quantify the performance benefits for the SDN control plane that can be achieved by integrating information from external entities, such as Network Management Systems (NMSs), into the control loop. Our evaluation results demonstrate the impact of optimizing various parameters of softwarized networks at different levels and are used to derive guidelines for efficient operation.
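To illustrate the decision-making step on Pareto frontiers, here is a minimal sketch of Pareto filtering over candidate controller placements with two hypothetical latency objectives; the dissertation's heuristics and decision mechanisms are considerably more involved:

```python
def pareto_frontier(placements):
    """Filter controller placements down to the Pareto-optimal ones.
    Each placement maps to an objective tuple to be minimized,
    e.g. (max node-to-controller latency, inter-controller latency)."""
    def dominated(a, b):  # True if b dominates a
        return (all(y <= x for x, y in zip(a, b))
                and any(y < x for x, y in zip(a, b)))
    return {p: obj for p, obj in placements.items()
            if not any(dominated(obj, other) for other in placements.values())}

# Hypothetical candidate placements with two latency objectives (ms):
candidates = {"A": (12.0, 30.0), "B": (15.0, 22.0),
              "C": (14.0, 31.0), "D": (11.0, 40.0)}
print(pareto_frontier(candidates))  # A, B, D remain; C is dominated by A
```

An automated decision mechanism can then pick a single placement from this frontier, for example by weighting the objectives according to operator preferences.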
The success of semantic systems has been proven over the last years.
Nowadays, Linked Data is the driver for the rapid development of ever new intelligent systems.
Especially in enterprise environments semantic systems successfully support more and more business processes.
This is especially true for after sales service in the mechanical engineering domain.
Here, service technicians need effective access to relevant technical documentation in order to diagnose and solve problems and defects.
Therefore, the usage of semantic information retrieval systems has become the new system metaphor.
Unlike classical retrieval software, such systems exploit Linked Enterprise Data graphs to grant targeted, problem-oriented access to relevant documents.
However, huge parts of legacy technical documents have not yet been integrated into Linked Enterprise Data graphs.
Additionally, a plethora of information models for the semantic representation of technical information exists.
The semantic maturity of these information models can hardly be measured.
This thesis motivates the inherent need for a self-contained semantification approach for technical documents.
This work introduces a maturity model that allows existing documentation to be assessed quickly.
Additionally, the approach comprises an abstracting semantic representation for technical documents that is aligned to all major standard information models.
The semantic representation combines structural and rhetorical aspects to provide access to so-called Core Documentation Entities.
A novel and holistic semantification process describes how technical documents in different legacy formats can be transformed to a semantic and linked representation.
The practical significance of the semantification approach depends on tools supporting its application.
This work presents an accompanying tool chain of semantification applications, in particular the semantification framework CAPLAN, a highly integrated development and runtime environment for semantification processes.
The complete semantification approach is evaluated in four real-life projects: in a spare part augmentation project, semantification projects for earth moving technology and harvesting technology, as well as an ontology population project for special purpose vehicles.
Three additional case studies underline the broad applicability of the presented ideas.
In this thesis, various aspects of Quality of Experience (QoE) research are examined. The work is divided into three major blocks: QoE assessment, QoE monitoring, and VNF performance evaluation. First, prominent cloud applications such as Google Docs and a cloud-based photo album are explored. The QoE is characterized and the influence of packet loss and delay is studied. Afterwards, objective QoE monitoring for HTTP Adaptive Video Streaming (HAS) in the cloud is investigated. Additionally, by using a Virtual Network Function (VNF) for QoE monitoring in the cloud, the feasibility of an interworking of Network Function Virtualization (NFV) and the cloud paradigm is evaluated. To this end, a VNF that exploits deep packet inspection techniques was used to parse the video traffic. An algorithm was then designed to estimate video quality and QoE based on network and application layer parameters. To assess the accuracy of the estimation, the VNF was measured in different scenarios, under different network QoS conditions, and within the virtual environment of the cloud architecture. The results show that the geographical deployment of the VNF influences the accuracy of the video quality and QoE estimation. Furthermore, various Service Function Chain (SFC) placement algorithms are proposed and compared in the context of edge cloud networks. On the one hand, this research supports cloud service providers by providing methods for evaluating the QoE of cloud applications. On the other hand, network operators can learn the pitfalls and disadvantages of using the NFV paradigm for such a QoE monitoring mechanism.
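As an illustration of mapping measured application-layer parameters to QoE, the sketch below uses an exponential stalling-to-MOS model of the kind published for YouTube-style streaming; the coefficients are literature values, not results of this thesis, and whether the thesis uses this exact mapping is not stated in the abstract.

    import math

    def stalling_mos(n_stalls, stall_len_s):
        # Exponential mapping f(L, N) = 3.5 * exp(-(0.15 L + 0.19) N) + 1.5 on
        # the 1..5 MOS scale; constants taken from the literature on stalling QoE.
        return 3.5 * math.exp(-(0.15 * stall_len_s + 0.19) * n_stalls) + 1.5

    print(stalling_mos(0, 0.0))  # 5.0: no stalling, best possible score
    print(stalling_mos(2, 3.0))  # ~2.5: two 3-second stalls already hurt noticeably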
A key functionality of cloud systems is the automated management of resources at the infrastructure level. As part of this, elastic scaling of allocated resources is realized by so-called auto-scalers, which are supposed to match the current demand in such a way that performance remains stable while resources are used efficiently.
The process of rating cloud infrastructure offerings in terms of the quality of their achieved elastic scaling remains undefined. Clear guidance for the selection and configuration of an auto-scaler for a given context is not available. Thus, existing operating solutions are optimized in a highly application specific way and usually kept undisclosed.
The common state of practice is the use of simplistic threshold-based approaches. Due to their reactive nature, they incur performance degradation during the minutes of provisioning delays. In the literature, a large number of auto-scalers has been proposed that try to overcome the limitations of reactive mechanisms by employing proactive prediction methods.
In this thesis, we identify potentials in automated cloud system resource management and its evaluation methodology. Specifically, we make the following contributions:
We propose a descriptive load profile modeling framework together with automated model extraction from recorded traces to enable reproducible workload generation with realistic load intensity variations. The proposed Descartes Load Intensity Model (DLIM) with its Limbo framework provides key functionality to stress and benchmark resource management approaches in a representative and fair manner.
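The following toy function illustrates the idea of describing load intensity as a composition of seasonal, trend, burst, and noise components; the concrete shapes and parameters are invented and do not reproduce the DLIM formalism or the Limbo tooling.

    import math, random

    def load_intensity(t_hours):
        seasonal = 50 * (1 + math.sin(2 * math.pi * t_hours / 24))  # daily cycle
        trend = 0.5 * t_hours                                       # slow growth
        burst = 80 if 18 <= (t_hours % 24) <= 20 else 0             # evening peak
        noise = random.gauss(0, 5)
        return max(0.0, seasonal + trend + burst + noise)

    arrivals = [load_intensity(t) for t in range(72)]  # three days, hourly samples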
We propose a set of intuitive metrics for quantifying timing, stability and accuracy aspects of elasticity. Based on these metrics, we propose a novel approach for benchmarking the elasticity of Infrastructure-as-a-Service (IaaS) cloud platforms independent of the performance exhibited by the provisioned underlying resources.
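As an illustration of accuracy-style elasticity metrics, the sketch below computes average under- and over-provisioning from a demand and a supply curve; this is one plausible formalization in the spirit described above, not necessarily the exact metric definitions from the thesis.

    # Average under-/over-provisioning in resource units per time step.
    def provisioning_accuracy(demand, supply):
        assert len(demand) == len(supply)
        under = sum(max(d - s, 0) for d, s in zip(demand, supply)) / len(demand)
        over = sum(max(s - d, 0) for d, s in zip(demand, supply)) / len(demand)
        return under, over

    demand = [2, 4, 6, 6, 3]   # resource units needed per time step
    supply = [2, 3, 6, 8, 4]   # resource units allocated by the auto-scaler
    print(provisioning_accuracy(demand, supply))  # -> (0.2, 0.6)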
We tackle the challenge of reducing the risk of relying on a single proactive auto-scaler by proposing a new self-aware auto-scaling mechanism, called Chameleon, which combines multiple proactive methods with a reactive fallback mechanism.
Chameleon employs on-demand, automated time-series-based forecasting methods to predict the arriving load intensity, in combination with run-time service demand estimation techniques, to calculate the required resource consumption per work unit without the need for detailed application instrumentation. It can also leverage application knowledge by solving product-form queueing networks to derive optimized scaling actions. The Chameleon approach is the first to resolve conflicts between reactive and proactive scaling decisions in an intelligent way.
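A minimal sketch of the proactive half of such a mechanism, using the utilization law: with forecast arrival rate lambda and estimated per-request service demand D, lambda * D servers are busy on average, padded by a utilization target. The queueing-network solving and the reconciliation with reactive decisions that Chameleon performs are omitted here.

    import math

    def required_instances(forecast_arrivals_per_s, service_demand_s,
                           target_utilization=0.8):
        busy = forecast_arrivals_per_s * service_demand_s  # avg busy servers
        return max(1, math.ceil(busy / target_utilization))

    # 120 req/s at 50 ms service demand -> 6 busy cores on average -> 8 instances.
    print(required_instances(120.0, 0.05))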
We are confident that the contributions of this thesis will have a long-term impact on the way cloud resource management approaches are assessed. While this could result in an improved quality of autonomic management algorithms, we see and discuss arising challenges for future research in cloud resource management and its assessment methods: The adoption of containerization on top of virtual machine instances introduces another level of indirection. As a result, the nesting of virtual resources increases resource fragmentation and causes unreliable provisioning delays. Furthermore, virtualized compute resources tend to become more and more inhomogeneous, associated with various priorities and trade-offs. Due to DevOps practices, cloud-hosted service updates are released with a higher frequency, which impacts the dynamics of user behavior.
Almost every week, news broadcasts report earthquakes, hurricanes, tsunamis, or forest fires. While such news is hard to watch, it is even harder for rescue troops to enter such areas. They need particular skills to get a quick overview of the devastated area and find victims. Time is ticking, since the chance of survival shrinks the longer it takes until help arrives. To coordinate the teams efficiently, all information needs to be collected at the command center. Therefore, teams search the destroyed houses and hollow spaces for victims. Doing so, they can never be sure that the building will not fully collapse while they are inside. Here, rescue robots are welcome helpers, as they are replaceable and make the work safer. Unfortunately, rescue robots are not yet usable off-the-shelf.
There is no doubt that such a robot has to fulfil essential requirements to successfully accomplish a rescue mission. Apart from the mechanical requirements, it has to be able to build a 3D map of the environment. This is essential to navigate through rough terrain and fulfil manipulation tasks (e.g. opening doors). To build a map and gather environmental information, robots are equipped with multiple sensors. Since laser scanners produce precise measurements and support a wide scanning range, they are the visual sensors most commonly utilized for mapping. Unfortunately, they produce erroneous measurements when scanning transparent objects (e.g. glass, transparent plastic) or specular reflective objects (e.g. mirrors, shiny metal). It is understood that such objects can be everywhere, and manipulating the environment in advance to prevent their influence is impossible. Using additional sensors also bears risks.
The problem is that these objects are only occasionally visible, depending on the incident angle of the laser beam, the surface, and the type of object. Hence, for transparent objects, measurements might result from the object surface or from objects behind it. For specular reflective objects, measurements might result from the object surface or from a mirrored object. These mirrored objects appear behind the surface, which is wrong. To obtain a precise map, the surfaces need to be recognised and mapped reliably; otherwise, the robot navigates into them and crashes. Further, points behind the surface should be identified and treated based on the object type. Points behind a transparent surface should remain, as they represent real objects. In contrast, points behind a specular reflective surface should be erased. To do so, the object type needs to be classified. Unfortunately, none of the current approaches is capable of fulfilling these requirements.
Therefore, this thesis addresses the problem of detecting transparent and specular reflective objects and identifying their influences. To give the reader a starting point, the first chapters describe the theoretical background concerning the propagation of light; sensor systems applied for range measurements; the mapping approaches used in this work; and the state of the art concerning the detection and identification of transparent and specular reflective objects. Afterwards, the Reflection-Identification-Approach, which is the core of this thesis, is presented. It comprises a 2D and a 3D implementation to detect and classify such objects; both are available as ROS nodes. In the next chapter, various experiments demonstrate the applicability and reliability of these nodes. They prove that transparent and specular reflective objects can be detected and classified. In 2D, a Pre- and Post-Filter module is required for this; in 3D, classification is possible with the Pre-Filter alone, owing to the higher number of measurements. An example shows that an updatable mapping module allows the robot navigation to rely on refined maps; otherwise, two individual maps are built, which require fusion afterwards. Finally, the last chapter summarizes the results and proposes suggestions for future work.
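To illustrate the post-filtering idea described above, the sketch below keeps, erases, or maps each scan point depending on the classified object type, assuming a surface already detected and, for brevity, lying on a horizontal line at a known height; the thesis's actual 2D/3D ROS nodes handle general geometry and combine pre- and post-filtering.

    # Simplified 2D post-filter for a horizontal surface at height surface_y,
    # with the scanner below it. Illustrative assumptions throughout.
    def classify_point(surface_y, point, object_type, eps=0.05):
        x, y = point
        if abs(y - surface_y) <= eps:
            return "surface"  # always map the surface itself
        if y > surface_y:     # measured beyond the surface
            return "keep" if object_type == "transparent" else "erase"
        return "keep"         # regular point in front of the surface

    scan = [(0.2, 0.5), (0.1, 1.0), (0.0, 1.6)]
    print([classify_point(1.0, p, "specular") for p in scan])
    # -> ['keep', 'surface', 'erase']: mirrored points behind a mirror are removed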
Understanding human navigation behavior has implications for a wide range of application scenarios. For example, insights into geo-spatial navigation in urban areas can impact city planning or public transport. Similarly, knowledge about navigation on the web can help to improve web site structures or service experience.
In this work, we focus on a hypothesis-driven approach to address the task of understanding human navigation: we aim to formulate and compare ideas, for example stemming from existing theory, literature, intuition, or previous experiments, based on a given set of navigational observations. For example, we may compare whether tourists exploring a city walk "short distances" before taking their next photo vs. "travel long distances between points of interest", or whether users browsing Wikipedia "navigate semantically" vs. "click randomly".
For this, the Bayesian method HypTrails has recently been proposed. However, while HypTrails is a straightforward and flexible approach, several major challenges remain:
i) HypTrails does not account for heterogeneity (e.g., differently behaving user groups such as tourists and locals cannot be incorporated); ii) HypTrails does not support the user in conceiving novel hypotheses when confronted with a large set of possibly relevant background information or influence factors, e.g., points of interest, popularity of locations, time of day, or user properties; and iii) formulating hypotheses can be technically challenging depending on the application scenario (e.g., due to continuous observations or temporal constraints). In this thesis, we address these limitations by introducing several novel methods and tools and explore a wide range of case studies.
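To make the comparison principle concrete, the sketch below expresses two hypotheses as Dirichlet priors over Markov-chain transitions and ranks them by the marginal likelihood of observed transition counts, which is the Bayesian core of HypTrails; the example states, counts, and prior strengths are invented for illustration.

    import numpy as np
    from scipy.special import gammaln

    def log_evidence(counts, prior):
        """Dirichlet-multinomial log evidence, summed over source states (rows)."""
        total = 0.0
        for n, a in zip(counts, prior):
            total += gammaln(a.sum()) - gammaln((a + n).sum())
            total += (gammaln(a + n) - gammaln(a)).sum()
        return total

    counts = np.array([[8.0, 2.0], [1.0, 9.0]])   # observed transitions
    h_stay = np.ones((2, 2)) + 5 * np.eye(2)      # "users tend to stay" prior
    h_random = np.ones((2, 2))                    # uniform "click randomly" prior
    print(log_evidence(counts, h_stay) > log_evidence(counts, h_random))  # True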
In particular, our main contributions are the methods MixedTrails and SubTrails which specifically address the first two limitations: MixedTrails is an approach for hypothesis comparison that extends the previously proposed HypTrails method to allow formulating and comparing heterogeneous hypotheses (e.g., incorporating differently behaving user groups). SubTrails is a method that supports hypothesis conception by automatically discovering interpretable subgroups with exceptional navigation behavior. In addition, our methodological contributions also include several tools consisting of a distributed implementation of HypTrails, a web application for visualizing geo-spatial human navigation in the context of background information, as well as a system for collecting, analyzing, and visualizing mobile participatory sensing data.
Furthermore, we conduct case studies in many application domains, which encompass — among others — geo-spatial navigation based on photos from the photo-sharing platform Flickr, browsing behavior on the social tagging system BibSonomy, and task choosing behavior on a commercial crowdsourcing platform. In the process, we develop approaches to cope with application specific subtleties (like continuous observations and temporal constraints). The corresponding studies illustrate the variety of domains and facets in which navigation behavior can be studied and, thus, showcase the expressiveness, applicability, and flexibility of our methods. Using these methods, we present new aspects of navigational phenomena which ultimately help to better understand the multi-faceted characteristics of human navigation behavior.
Biologically inspired self-organization methods can help to manage the access control to the shared communication medium of Wireless Sensor Networks. One lightweight approach is the primitive of desynchronization, which relies on the periodic transmission of short control messages, similar to the periodic pulses of oscillators. This primitive of desynchronization has already been successfully implemented as a MAC protocol for single-hop topologies. Moreover, some concepts for such a protocol for multi-hop topologies are available. However, the existing implementations handle only a certain class of multi-hop topologies or are not robust against topology dynamics. In addition to the sophisticated access control of the sensor nodes of a Wireless Sensor Network in arbitrary multi-hop topologies, the communication protocol has to be lightweight, applicable, and scalable. These characteristics are of particular interest for distributed and randomly deployed networks (e.g., deployed by dropping nodes off an airplane).
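The desynchronization primitive itself can be illustrated in a few lines: each node nudges its next firing phase toward the midpoint of its two phase neighbors, which over repeated rounds spreads all firings evenly across the period. The toy below simulates a round-based single-hop variant with an invented jump parameter; the multi-hop protocol developed in this work handles far more (hidden nodes, topology dynamics).

    def desync_update(own, prev, nxt, alpha=0.5):
        # Move toward the midpoint of the two phase neighbors.
        return (1 - alpha) * own + alpha * (prev + nxt) / 2.0

    phases = [0.00, 0.10, 0.15, 0.70]  # firing phases as fractions of the period
    for _ in range(50):
        n = len(phases)
        phases = sorted(
            desync_update(
                phases[i],
                phases[i - 1] - (1.0 if i == 0 else 0.0),           # wrap previous
                phases[(i + 1) % n] + (1.0 if i == n - 1 else 0.0), # wrap next
            ) % 1.0
            for i in range(n)
        )
    print(phases)  # converges toward even spacing, roughly 0.25 apart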
In this work, we present the development of a self-organizing MAC protocol for dynamic multi-hop topologies. This comprises the evaluation of related work, the conception of our new communication protocol based on the primitive of desynchronization, as well as its implementation for sensor nodes. As a matter of course, we also analyze our realization with regard to our specific requirements. This analysis is based on several scenarios, simulative as well as real-world. Since we are mainly interested in the convergence behavior of our protocol, we do not focus on the "classical" network issues, like routing behavior or data rate, within this work. Nevertheless, we make use of several real-world testbeds as well as our self-developed simulation framework.
According to the results of our evaluation phase, our self-organizing MAC protocol for WSNs, which is based on the primitive of desynchronization, meets all our demands. In fact, our communication protocol operates in arbitrary multi-hop topologies and copes well with topology dynamics. To the best of our knowledge, it is the first and only MAC protocol to do so. Moreover, due to its periodic transmission scheme, it may be an appropriate starting point for additional network services, like time synchronization or routing.
The design and implementation of a satellite mission is divided into several phases, and the requirements evolve in parallel to these phases. Because many people in different locations and from different backgrounds have to work on different subsystems concurrently, the ideas and concepts of the different subsystems and locations diverge, and we have to bring them together again. To do this, we introduce synchronization points: we bring representatives from all subsystems and all locations together in a Concurrent Engineering Facility (CEF) room. Between CEF sessions the subsystems diverge again, but each time the divergence is smaller. Our subjective experience from test projects suggests that these CEF sessions are most effective in the first phases of development, from requirements engineering until the first coarse design. Once the design and the concepts are fixed, the developers move on to implementation and the conceptual divergences become much smaller; therefore, the CEF sessions are no longer of much help.
Additive manufacturing, often catchily called "3D printing", refers to a manufacturing technology that enables the production of physical objects from digital, three-dimensional models. The fundamental working principle, common to all additive (or generative) manufacturing processes, is the layer-by-layer creation of the object. One of the technology's key advantages is the freedom of design, which allows the integration of complex geometries.
Due to the increasing availability of low-cost devices for home use and the growing market presence of printing service providers, the technology has become available to end customers in a way that was previously, owing to high costs, reserved for large corporations. As a consequence, additive manufacturing has increasingly attracted the attention of the general public. So far, however, science and research have focused primarily on questions of processes and materials. Questions concerning economic and societal impacts, in particular, have received little attention. For this reason, this dissertation examines the manifold implications and effects of the technology.
First, the fundamentals of the manufacturing technology that are central to understanding this work are explained. In addition to the technology's basic working principle, relevant terms from the context of additive manufacturing are introduced and related to one another.
Subsequently, the development and the actors of the additive manufacturing value chain are outlined. Then, various business models in the context of additive manufacturing are systematically visualized and explained. A further important aspect is the expected economic potential, which can be derived from a number of technical characteristics. It can be concluded that the design space of manufacturing systems is expanded with respect to complexity, efficiency gains, and variant diversity. The insights gained are also used to analyze two representatives of the industry in exemplary case studies.
One of the case studies examined is the popular online platform and community Thingiverse, which enables publishing, sharing, and remixing a multitude of printable digital 3D models. Remixing, originally known from the music world, is nowadays, with the emergence of open online platforms, applied to the design of arbitrary physical things. Despite its unmistakable importance for both the quantity and the quality of innovations on these platforms, little is known about the process of remixing and the factors that influence it. For this reason, the remix activities of the platform are analyzed exploratively. Based on the results of this investigation, five theses as well as practice-oriented recommendations and implications are formulated. The analysis focuses on the role of remixes in design communities, different patterns in the remixing process, platform features that promote remixing, and the profile of the remixing user base.
Owing to disappointed expectations regarding 3D printing at home, this democratic form of production has received little attention. However, if one shifts the focus from the technology to the hobbyists themselves, new insights into the underlying innovation processes can be gained. The results of a qualitative study with more than 75 designers show, among other things, that designers have already internalized the concept of remixing and apply it beyond the platform in various contexts. A further contribution, which extends existing theory on innovation processes, is the identification and description of six different remix processes, which can be distinguished by the characteristics of skills, triggers, and motivation.
Dementia is a complex neurodegenerative syndrome that by 2050 could affect about 135 million people worldwide. People with dementia experience a progressive decline in their cognitive abilities and have serious problems coping with activities of daily living, including orientation and wayfinding tasks. They even experience difficulties finding their way in a familiar environment. Being lost, or the fear of getting lost, may consequently develop into other psychological deficits such as anxiety, suspicion, illusions, and aggression. Frequent results are social isolation and a reduced quality of life. Moreover, the lives of relatives and caregivers of people with dementia are also negatively affected.
Regarding navigation and orientation, most existing approaches focus on outdoor environments and people with mild dementia, who have the capability to use mobile devices. However, Rasquin (2007) observes that even a device with three buttons may be too complicated for people with moderate to severe dementia. In addition, people who live in care homes mainly perform indoor activities. Given this background, we decided to focus on designing a system for indoor environments for people with moderate to severe dementia, who are unable or reluctant to use smartphone technology.
Adopting a user-centered design approach, the context and requirements of people with dementia were gathered as a first step to understand the needs and difficulties (especially spatial disorientation and wayfinding problems) experienced in dementia care facilities. Then, an "Implicit Interactive Intelligent (III) Environment" for people with dementia was proposed, emphasizing implicit interaction and natural interfaces. The backbone of this III Environment supports orientation and navigation tasks with three systems: a monitoring system, an intelligent system, and a guiding system. The monitoring system and the intelligent system automatically detect and interpret the locations and activities of the users, i.e., people with dementia. This implicit input reduces the cognitive as well as the physical workload placed on the user to provide input. The intelligent system is also aware of context, predicts upcoming situations (location, activity), and decides when to provide an appropriate service to the users. The guiding system, with intuitive and dynamic environmental cues (lighting with color), is responsible for guiding the users to the places they need to be.
Overall, three types of monitoring systems based on Ultra-Wideband and iBeacon technologies were implemented, with different techniques and algorithms for different contexts of use. They showed high user acceptance at a reasonable price, as well as decent accuracy and precision. In the intelligent system, models were built to recognize the users' current activity, detect erroneous activities, predict the next location and activity, analyze the history data, detect issues, and notify caregivers and suggest solutions to them via visualized web interfaces. Regarding the guiding system, five studies were conducted to test and evaluate the effect of lighting with color on people with dementia; the results were promising. Although several components of the III Environment in general, and the three systems in particular, are in place (implemented and tested separately), integrating them all and employing the result in the dementia context in a full, proper evaluation with the actual stakeholders (people with dementia and caregivers) remains a step for future work.
Mini Unmanned Aerial Vehicles (MUAVs) are becoming popular research platforms and drawing considerable attention, particularly during the last decade, due to their affordability and multi-dimensional applications in almost every walk of life. MUAVs have obvious advantages over manned platforms, including much lower manufacturing and operational costs, risk avoidance for human pilots, flying safely low and slow, and the realization of operations that are beyond inherent human limitations. The advancement in Micro Electro-Mechanical System (MEMS) technology, avionics, and the miniaturization of sensors have also played a significant role in the evolution of MUAVs. These vehicles range from simple toys found in electronics supermarkets for entertainment purposes to highly sophisticated commercial platforms performing novel assignments like offshore wind power station inspection and 3D modelling of buildings. MUAVs are also more environmentally friendly, as they cause less air pollution and noise. Unmanned is therefore unmatched. Recent research focuses on the use of multiple inexpensive vehicles flying together, while maintaining the required relative separations, to carry out tasks more efficiently than a single exorbitant vehicle. Redundancy also does away with the risk of losing a single vehicle on which the whole mission depends. Some of the valuable applications in the domain of cooperative control include joint load transportation, search and rescue, mobile communication relays, pesticide spraying, and weather monitoring. Though the realization of multi-UAV coupled flight is complex, the obvious advantages justify the laborious work involved...
The thesis focuses on the Quality of Experience (QoE) of HTTP adaptive video streaming (HAS) and on traffic management in access networks to improve the QoE of HAS. First, the QoE impact of adaptation parameters and time on layer was investigated with subjective crowdsourcing studies. The results were used to compute a QoE-optimal adaptation strategy for given video and network conditions. This allows video service providers to develop and benchmark improved adaptation logics for HAS. Furthermore, the thesis investigated concepts to monitor video QoE on the application and network layers, which can be used by network providers in the QoE-aware traffic management cycle. Moreover, an analytic and simulative performance evaluation of QoE-aware traffic management on a bottleneck link was conducted. Finally, the thesis investigated socially-aware traffic management for HAS via Wi-Fi offloading of mobile HAS flows. A model for the distribution of public Wi-Fi hotspots and a platform for socially-aware traffic management on private home routers were presented. A simulative performance evaluation investigated the impact of Wi-Fi offloading on the QoE and energy consumption of mobile HAS.
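As a point of reference for what an adaptation logic does, the sketch below implements the simplest throughput-based heuristic: pick the highest representation below a safety fraction of the measured throughput. The bitrate ladder and margin are invented; the QoE-optimal strategies computed in the thesis are a different, more sophisticated matter.

    REPRESENTATIONS_KBPS = [250, 500, 1000, 2500, 5000]  # hypothetical ladder

    def choose_quality(throughput_kbps, safety=0.8):
        budget = throughput_kbps * safety
        eligible = [r for r in REPRESENTATIONS_KBPS if r <= budget]
        return eligible[-1] if eligible else REPRESENTATIONS_KBPS[0]

    print(choose_quality(3200))  # -> 2500 kbps (highest layer below the budget)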
The importance of enterprise systems is growing steadily; they are at the center of attention for organizations in various types of business and industry, from extra-large public or private organizations to small and medium-sized service-sector businesses. These systems are continuously advancing both functionally and technologically and are indispensable for enterprises aiming to maximize their productivity and integration in today's competitive national and global business environments.
Moreover, since local software solutions cannot meet the functional and technical requirements of especially large enterprises, and as global enterprise software producers like SAP, Oracle, and Microsoft rapidly improve their solutions and expand their markets to more corners of the globe, demand for these globally branded, low-defect software solutions is rising daily. Contracts for international ERP implementation consultancy are therefore increasing rapidly, while research on the influencing factors and know-how remains scattered and rare; research in this field is thus urgently needed.
The five-in-five framework developed in this study collects, for the first time, all critical success factors and project activities mentioned in the literature, sequencing them in five phases and categorizing them in five focus areas for international ERP implementation projects. This framework provides a bird's-eye view and a comprehensive roadmap for such projects.
The field of genetics faces many challenges and opportunities in both research and diagnostics due to the rise of next-generation sequencing (NGS), a technology that allows DNA to be sequenced increasingly fast and cheaply.
NGS is not only used to analyze DNA, but also RNA, which is a very similar molecule also present in the cell, in both cases producing large amounts of data.
The large amount of data raises both infrastructure and usability problems, as powerful computing infrastructures are required and the data analysis involves many manual steps that are complicated to execute.
Both of these problems limit the use of NGS in the clinic and in research by creating a bottleneck, both computationally and in terms of manpower, as for many analyses geneticists lack the required computing skills.
Over the course of this thesis we investigated how computer science can help to improve this situation to reduce the complexity of this type of analysis.
We looked at how to make the analysis more accessible, to increase the number of people who can perform OMICS data analysis (OMICS subsumes various genomics data sources).
To approach this problem, we developed, in close collaboration with the Human Genetics Department at the University of Würzburg, a graphical NGS data analysis pipeline aimed at a diagnostics environment while still being useful in research.
The pipeline has been used in various research papers covering subjects in genomics, transcriptomics, and epigenomics, including works with direct author participation.
To further validate the graphical pipeline, a user survey was carried out which confirmed that it lowers the complexity of OMICS data analysis.
We also studied how the data analysis can be improved in terms of computing infrastructure by improving the performance of certain analysis steps.
We did this both in terms of speed improvements on a single computer (notably, variant calling became up to 18 times faster) and with distributed computing to better use existing infrastructure.
The improvements were integrated into the previously described graphical pipeline, which itself also was focused on low resource usage.
As a major contribution, and to help with the future development of parallel and distributed applications, for use in genetics or elsewhere, we also looked at how to make it easier to develop such applications.
Based on the parallel object programming model (POP), we created a Java language extension called POP-Java, which allows for easy and transparent distribution of objects.
Through this development, we brought the POP model to the cloud, Hadoop clusters and present a new collaborative distributed computing model called FriendComputing.
The advances made in the different domains of this thesis have been published in various works specified in this document.
The progress made in semiconductor chip production in recent years enables a multitude of cores on a single die. However, due to further decreasing structure sizes, fault tolerance and energy consumption represent key challenges. Furthermore, an efficient communication infrastructure is indispensable due to the high degree of parallelism in those systems. The predominant communication system in such highly parallel systems is the Network on Chip (NoC). The focus of this thesis is on NoCs based on deflection routing. In this context, contributions are made to two domains, fault tolerance and dimensioning of the optimal link width. Both aspects are essential for the application of reliable, energy-efficient NoCs based on deflection routing.
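For readers unfamiliar with deflection (hot-potato) routing, the toy arbitration step below captures its essence: every packet must leave through some output port each cycle, and a packet that loses arbitration for all of its productive ports is deflected to a free port instead of being buffered. The mesh, the ports, and the priority scheme are simplified assumptions.

    def route_cycle(packets, ports=("N", "E", "S", "W")):
        """packets: list of (x, y, dest_x, dest_y); returns one port per packet."""
        taken, assignment = set(), []
        for x, y, dx, dy in packets:  # list order = arbitration priority
            productive = []
            if dx > x: productive.append("E")
            if dx < x: productive.append("W")
            if dy > y: productive.append("N")
            if dy < y: productive.append("S")
            free = [p for p in productive if p not in taken]
            port = free[0] if free else next(p for p in ports if p not in taken)
            taken.add(port)  # each output port is used at most once per cycle
            assignment.append(port)
        return assignment

    # Two packets both heading east: the second one is deflected to 'N'.
    print(route_cycle([(0, 0, 2, 0), (0, 0, 3, 0)]))  # -> ['E', 'N']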
It is expected that future semiconductor systems will have to cope with high fault probabilities. The inherently high connectivity of most NoC topologies can be exploited to tolerate the breakdown of links and other components. In this thesis, a fault-tolerant router architecture has been developed, which stands out for its interconnection architecture and its method for overcoming complex fault situations. The presented simulation results show that all data packets arrive at their destination, even at high fault probabilities. In contrast to routing-table-based architectures, the hardware costs of the architecture presented herein are lower and, in particular, independent of the number of components in the network.
Besides fault tolerance, hardware costs and energy efficiency are of great importance. The utilized link width has a decisive influence on these aspects. In particular, in deflection-routing-based NoCs, over- and under-sizing of the link width lead to unnecessarily high hardware costs and poor performance, respectively. In the second part of this thesis, the optimal link width of deflection-routing-based NoCs is investigated. Additionally, a method to reduce the link width is introduced. Simulation and synthesis results show that the method presented herein allows a significant reduction of hardware costs at comparable performance.
RNA-binding proteins (RBPs) have been extensively studied in eukaryotes, where they post-transcriptionally regulate many cellular events, including RNA transport, translation, and stability. Experimental techniques, such as cross-linking and co-purification followed by either mass spectrometry or RNA sequencing, have enabled the identification and characterization of RBPs, their conserved RNA-binding domains (RBDs), and the regulatory roles of these proteins on a genome-wide scale. These developments in quantitative, high-resolution, and high-throughput screening techniques have greatly expanded our understanding of RBPs in human and yeast cells. In contrast, our knowledge of the number and potential diversity of RBPs in bacteria is comparatively poor, in part due to the technical challenges associated with existing global screening approaches developed in eukaryotes.
Genome- and proteome-wide screening approaches performed in silico may circumvent these technical issues and provide a broad picture of the RNA interactome of bacteria, identifying strong RBP candidates for more detailed experimental study. Here, I report APRICOT (“Analyzing Protein RNA Interaction by Combined Output Technique”), a computational pipeline for the sequence-based identification and characterization of candidate RNA-binding proteins encoded in the genomes of all domains of life, using RBDs known from experimental studies. The pipeline identifies functional motifs in the protein sequences of an input proteome using position-specific scoring matrices and hidden Markov models of all conserved domains available in the databases, and then statistically scores them based on a series of sequence-based features. Subsequently, APRICOT identifies putative RBPs and characterizes them according to functionally relevant structural properties. APRICOT performed better than other existing tools for sequence-based prediction on known RBP data sets. The applications and adaptability of the software were demonstrated on several large bacterial RBP data sets, including the complete proteome of Salmonella Typhimurium strain SL1344. APRICOT reported 1068 Salmonella proteins as RBP candidates, which were subsequently categorized using the RBDs that have been reported in both eukaryotic and bacterial proteins. A set of 131 strong RBP candidates was selected for experimental confirmation and characterization of RNA-binding activity using RNA co-immunoprecipitation followed by high-throughput sequencing (RIP-Seq). Based on the relative abundance of transcripts across the RIP-Seq libraries, a catalogue of enriched genes was established for each candidate, indicating the RNA-binding potential of 90% of these proteins. Furthermore, the direct targets of a few of these putative RBPs were validated by means of cross-linking and co-immunoprecipitation (CLIP) experiments.
This thesis presents the computational pipeline APRICOT for the global screening of protein primary sequences for potential RBPs in bacteria using RBD information from all kingdoms of life. Furthermore, it provides the first bio-computational resource of putative RBPs in Salmonella, which could now be further studied for their biological and regulatory roles. The command line tool and its documentation are available at https://malvikasharan.github.io/APRICOT/.
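To illustrate one ingredient of such a pipeline, position-specific scoring, the sketch below slides a tiny, invented log-odds matrix over a protein sequence and reports the best-scoring window; APRICOT itself uses PSSMs and hidden Markov models from established domain databases rather than hand-written matrices.

    PSSM = {  # hypothetical 3-column log-odds matrix for a tiny motif
        0: {"R": 1.2, "K": 1.0, "A": -0.5},
        1: {"G": 0.8, "A": 0.2, "R": -0.7},
        2: {"F": 1.5, "Y": 1.1, "G": -0.9},
    }

    def best_window(seq, pssm=PSSM, default=-1.0):
        width = len(pssm)
        scored = [
            (sum(pssm[i].get(seq[pos + i], default) for i in range(width)), pos)
            for pos in range(len(seq) - width + 1)
        ]
        return max(scored)  # (score, start position)

    print(best_window("MARGFKA"))  # -> (3.5, 2): the window 'RGF' at index 2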
Content Delivery Networks (CDNs) are networks that distribute content in the Internet. CDNs are increasingly responsible for the largest share of traffic in the Internet. CDNs distribute popular content to caches in many geographical areas to save bandwidth by avoiding unnecessary multihop retransmission. By bringing the content geographically closer to the user, CDNs also reduce the latency of the services.
Besides end users and content providers, which require high availability of high quality content, CDN providers and Internet Service Providers (ISPs) are interested in an efficient operation of CDNs. In order to ensure an efficient replication of the content, CDN providers have a network of (globally) distributed interconnected datacenters at different points of presence (PoPs). ISPs aim to provide reliable and high speed Internet access. They try to keep the load on the network low and to reduce cost for connectivity with other ISPs.
The increasing number of mobile devices, such as smartphones and tablets, high-definition video content, and high-resolution displays result in continuous growth in mobile traffic. This growth is further accelerated by newly emerging services, such as mobile live streaming and broadcasting services. Mobile traffic is expected to reach roughly 60% of total network traffic by 2018, the majority of which will be video. To handle this growth, the next generation of 5G mobile networks is designed to have higher access rates and an increased densification of the network infrastructure. With the explosion of access rates and the number of base stations, the backhaul of wireless networks will become congested.
To reduce the load on the backhaul, the research community suggests installing local caches in gateway routers between the wireless network and the Internet, in base stations of different sizes, and in end-user devices. The local deployment of caches allows the traffic to be kept within the ISP's network. The caches are organized in a hierarchy, where caches in the lowest tier are requested first; the request is forwarded to the next tier if the requested object is not found. Appropriate evaluation methods are required to optimally dimension the caches depending on the traffic characteristics and the available resources. Additionally, methods are necessary that allow the performance evaluation of backhaul bandwidth aggregation systems, which further reduce the load on the backhaul.
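The dimensioning question can be made tangible with a toy two-tier cache hierarchy: requests hit the edge cache first and fall through to the parent, with LRU eviction at both tiers. Capacities and the request trace are invented; the analytical and optimization models of the thesis go well beyond this sketch.

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity, self.store = capacity, OrderedDict()
        def request(self, obj):
            if obj in self.store:
                self.store.move_to_end(obj)     # refresh recency on a hit
                return True
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict the least recently used
            self.store[obj] = True              # cache on the way back (miss)
            return False

    edge, parent = LRUCache(2), LRUCache(4)
    trace = [1, 2, 1, 3, 1, 2, 4, 1]
    served = ["edge" if edge.request(o) else
              "parent" if parent.request(o) else "origin" for o in trace]
    print(served)  # where each request in the trace was served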
This thesis analyses CDNs utilizing locally available resources and develops the following evaluation and optimization approaches: characterization of CDNs and the distribution of resources in the Internet; analysis and optimization of hierarchical caching systems with bandwidth constraints; and performance evaluation of bandwidth aggregation systems.
This thesis contributes to several issues in the context of SDN and NFV, with an emphasis on performance and management.
The main contributions are guidelines for operators migrating to software-based networks, as well as an analytical model for packet processing in a Linux system using the Kernel NAPI.
In this work, concurrency, consistency, and latency in asynchronous interactive real-time systems are investigated using the techniques of profiling and model checking. First, it is explained why the asynchronous model is the most promising one for concurrency in an interactive real-time system; for this purpose, a comparison with other models is drawn. Furthermore, a detailed comparison of synchronization technologies, which form the basis for consistency, is carried out. Based on these two comparisons and on the examination of other systems, a synchronization concept is developed.
On this basis, concurrency, consistency, and latency are investigated with two techniques. The first technique is profiling, for which several new forms of representing measured data are developed; these newly developed representations are used in the implementation of a profiler. As the second technique, model checking is analyzed, which has not previously been used in the context of interactive real-time systems. Model checking serves to predict the behavior of an interactive real-time system, and these predictions are compared with the measurements from the profiler.
Nowadays, data centers are becoming increasingly dynamic due to the common adoption of virtualization technologies. Systems can scale their capacity on demand by growing and shrinking their resources dynamically based on the current load. However, the complexity and performance of modern data centers is influenced not only by the software architecture, middleware, and computing resources, but also by network virtualization, network protocols, network services, and configuration. The field of network virtualization is not as mature as server virtualization and there are multiple competing approaches and technologies. Performance modeling and prediction techniques provide a powerful tool to analyze the performance of modern data centers. However, given the wide variety of network virtualization approaches, no common approach exists for modeling and evaluating the performance of virtualized networks.
The performance community has proposed multiple formalisms and models for evaluating the performance of infrastructures based on different network virtualization technologies. The existing performance models can be divided into two main categories: coarse-grained analytical models and highly detailed simulation models. Analytical performance models are normally defined at a high level of abstraction; they abstract many details of the real network and therefore have limited predictive power. Simulation models, on the other hand, are normally focused on a selected networking technology and take many specific performance-influencing factors into account, resulting in detailed models that are tightly bound to a given technology, infrastructure setup, or protocol stack.
Existing models are inflexible; that is, they provide a single solution method without giving the user any means to influence the solution accuracy and the solution overhead. To obtain flexibility in the performance prediction, the user is required to build multiple different performance models, yielding multiple performance predictions, each of which may have a different focus, different performance metrics, prediction accuracy, and solving time.
The goal of this thesis is to develop a modeling approach that does not require the user to have experience in any of the applied performance modeling formalisms. The approach offers flexibility in modeling and analysis by balancing between (a) the generic character and low overhead of coarse-grained analytical models and (b) more detailed simulation models with higher prediction accuracy.
The contributions of this thesis intersect with technologies and research areas, such as: software engineering, model-driven software development, domain-specific modeling, performance modeling and prediction, networking and data center networks, network virtualization, Software-Defined Networking (SDN), Network Function Virtualization (NFV). The main contributions of this thesis compose the Descartes Network Infrastructure (DNI) approach and include:
• Novel modeling abstractions for virtualized network infrastructures. This includes two meta-models that define modeling languages for modeling data center network performance. The DNI and miniDNI meta-models provide means for representing network infrastructures at two different abstraction levels. Regardless of which variant of the DNI meta-model is used, the modeling language provides generic modeling elements that allow the majority of existing and future network technologies to be described, while at the same time abstracting factors that have low influence on the overall performance. I focus on SDN and NFV as examples of modern virtualization technologies.
• Network deployment meta-model, an interface between DNI and other meta-models that allows mappings between DNI and other descriptive models to be defined. The integration with other domain-specific models allows capturing behaviors that are not reflected in the DNI model, for example, software bottlenecks, server virtualization, and middleware overheads.
• Flexible model solving with model transformations. The transformations enable solving a DNI model by transforming it into a predictive model. The model transformations vary in size and complexity depending on the amount of data abstracted in the transformation process and provided to the solver. In this thesis, I contribute six transformations that transform DNI models into various predictive models based on the following modeling formalisms: (a) OMNeT++ simulation, (b) Queueing Petri Nets (QPNs), (c) Layered Queueing Networks (LQNs). For each of these formalisms, multiple predictive models are generated (e.g., models with different levels of detail): (a) two for OMNeT++, (b) two for QPNs, (c) two for LQNs. Some predictive models can be solved using multiple alternative solvers, resulting in up to ten different automated solving methods for a single DNI model.
• A model extraction method that supports the modeler in the modeling process by automatically prefilling the DNI model with the network traffic data. The contributed traffic profile abstraction and optimization method provides a trade-off by balancing between the size and the level of detail of the extracted profiles.
• A method for selecting feasible solving methods for a DNI model. The method proposes a set of solvers based on trade-off analysis characterizing each transformation with respect to various parameters such as its specific limitations, expected prediction accuracy, expected run-time, required resources in terms of CPU and memory consumption, and scalability.
• An evaluation of the approach in the context of two realistic systems. I evaluate the approach with a focus on factors such as prediction of network capacity and interface throughput, applicability, and flexibility in trading off prediction accuracy against solving time. Despite not focusing on maximizing the prediction accuracy, I demonstrate that in the majority of cases the prediction error is low: up to 20% for uncalibrated models and up to 10% for calibrated models, depending on the solving technique.
In summary, this thesis presents the first approach to flexible run-time performance prediction in data center networks, including networks based on SDN. It provides the ability to flexibly balance between performance prediction accuracy and solving overhead. The approach provides the following key benefits:
• It is possible to predict the impact of changes in the data center network on the performance. The changes include: changes in network topology, hardware configuration, traffic load, and applications deployment.
• DNI can successfully model and predict the performance of multiple different network infrastructures, including proactive SDN scenarios.
• The prediction process is flexible, that is, it balances between the granularity of the predictive models and the solving time. Decreased prediction accuracy is usually rewarded with savings in solving time and in the resources required for solving.
• The users are enabled to conduct performance analysis using multiple different prediction methods without requiring the expertise and experience in each of the modeling formalisms.
The components of the DNI approach can be also applied to scenarios that are not considered in this thesis. The approach is generalizable and applicable for the following examples: (a) networks outside of data centers may be analyzed with DNI as long as the background traffic profile is known; (b) uncalibrated DNI models may serve as a basis for design-time performance analysis; (c) the method for extracting and compacting of traffic profiles may be used for other, non-network workloads as well.
Virtualization allows the creation of virtual instances of physical devices, such as network and processing units. In a virtualized system, governed by a hypervisor, resources are shared among virtual machines (VMs). Virtualization has been receiving increasing interest as a way to reduce costs through server consolidation and to enhance the flexibility of physical infrastructures. Although virtualization provides many benefits, it introduces new security challenges: hypervisors expose new attack surfaces and thus new threats.
Intrusion detection is a common cyber security mechanism whose task is to detect malicious activities in host and/or network environments. This enables a timely reaction in order to stop an ongoing attack or to mitigate the impact of a security breach. The wide adoption of virtualization has resulted in the increasingly common practice of deploying conventional intrusion detection systems (IDSs), for example, hardware IDS appliances or common software-based IDSs, in designated VMs as virtual network functions (VNFs). In addition, the research and industrial communities have developed IDSs specifically designed to operate in virtualized environments (i.e., hypervisor-based IDSs), with components both inside the hypervisor and in a designated VM. The latter are becoming increasingly common with the growing proliferation of virtualized data centers and the adoption of the cloud computing paradigm, for which virtualization is a key enabling technology.
To minimize the risk of security breaches, methods and techniques for evaluating IDSs in an accurate manner are essential. For instance, one may compare different IDSs in terms of their attack detection accuracy in order to identify and deploy the IDS that operates optimally in a given environment, thereby reducing the risk of a security breach. However, methods and techniques for the realistic and accurate evaluation of the attack detection accuracy of IDSs in virtualized environments (i.e., IDSs deployed as VNFs or hypervisor-based IDSs) are lacking. That is, workloads that exercise the sensors of an evaluated IDS and contain attacks targeting hypervisors are needed. Attacks targeting hypervisors are of high severity, since they may result in, for example, altering the hypervisor's memory and thus enabling the execution of malicious code with hypervisor privileges. In addition, there are no metrics and measurement methodologies for accurately quantifying the attack detection accuracy of IDSs in virtualized environments with elastic resource provisioning (i.e., on-demand allocation or deallocation of virtualized hardware resources to VMs). Modern hypervisors allow hotplugging virtual CPUs and memory on the designated VM where the intrusion detection engine of hypervisor-based IDSs, as well as of IDSs deployed as VNFs, typically operates. Resource hotplugging may have a significant impact on the attack detection accuracy of an evaluated IDS, which is not taken into account by existing metrics for quantifying IDS attack detection accuracy. This may lead to inaccurate measurements, which, in turn, may result in the deployment of misconfigured or ill-performing IDSs, increasing the risk of security breaches.
This thesis presents contributions that span the standard components of any system evaluation scenario: workloads, metrics, and measurement methodologies. The scientific contributions of this thesis are:
A comprehensive systematization of the common practices and the state-of-the-art on IDS evaluation. This includes: (i) a definition of an IDS evaluation design space that puts existing practical and theoretical work into a common context in a systematic manner; (ii) an overview of common practices in IDS evaluation, reviewing evaluation approaches and methods related to each part of the design space; and (iii) a set of case studies demonstrating how different IDS evaluation approaches are applied in practice. Given the significant amount of existing practical and theoretical work related to IDS evaluation, the presented systematization is beneficial for improving the general understanding of the topic by providing an overview of the current state of the field. In addition, it is beneficial for identifying and contrasting the advantages and disadvantages of different IDS evaluation methods and practices, while also helping to identify specific requirements and best practices for evaluating current and future IDSs.
An in-depth analysis of common vulnerabilities of modern hypervisors, as well as a set of attack models capturing the activities of attackers triggering these vulnerabilities. The analysis includes 35 representative vulnerabilities of hypercall handlers (i.e., hypercall vulnerabilities). Hypercalls are software traps from the kernel of a VM to the hypervisor. The hypercall interface of hypervisors, along with device drivers and VM exit events, is one of the attack surfaces that hypervisors expose. Triggering a hypercall vulnerability may lead to a crash of the hypervisor or to altering the hypervisor's memory. We analyze the origins of the considered hypercall vulnerabilities, demonstrate and analyze possible attacks that trigger them (i.e., hypercall attacks), develop hypercall attack models (i.e., systematized activities of attackers targeting the hypercall interface), and discuss future research directions focusing on approaches for securing hypercall interfaces.
A novel approach for evaluating IDSs enabling the generation of workloads that contain attacks targeting hypervisors, that is, hypercall attacks. We propose an approach for evaluating IDSs using attack injection (i.e., controlled execution of attacks during regular operation of the environment where an IDS under test is deployed). The injection of attacks is performed based on attack models that capture realistic attack scenarios. We use the hypercall attack models developed as part of this thesis for injecting hypercall attacks.
A novel metric and measurement methodology for quantifying the attack detection accuracy of IDSs in virtualized environments that feature elastic resource provisioning. We demonstrate how the elasticity of resource allocations in such environments may impact the IDS attack detection accuracy and show that using existing metrics in such environments may lead to practically challenging and inaccurate measurements. We also demonstrate the practical use of the metric we propose through a set of case studies, where we evaluate common conventional IDSs deployed as VNFs.
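As an illustration only, the sketch below computes conventional detection-accuracy measures and shows one plausible way to make them elasticity-aware: weighting the measurement taken in each resource-allocation state by the time spent in that state. The counts and states are hypothetical, and the thesis's actual metric and methodology may be defined differently.

    def rates(tp, fp, fn, tn):
        tpr = tp / (tp + fn) if (tp + fn) else 0.0  # share of detected attacks
        fpr = fp / (fp + tn) if (fp + tn) else 0.0  # share of false alarms
        return tpr, fpr

    # Hypothetical measurements: (time share, (TP, FP, FN, TN)) per state.
    states = [
        (0.7, (45, 3, 5, 947)),   # baseline allocation of the IDS VM
        (0.3, (30, 9, 20, 941)),  # after CPU/memory hotplugging under load
    ]
    tpr = sum(w * rates(*c)[0] for w, c in states)
    fpr = sum(w * rates(*c)[1] for w, c in states)
    print(f"time-weighted TPR={tpr:.3f}, FPR={fpr:.3f}")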
In summary, this thesis presents the first systematization of the state-of-the-art on IDS evaluation, considering workloads, metrics, and measurement methodologies as integral parts of every IDS evaluation approach. In addition, we are the first to examine the hypercall attack surface of hypervisors in detail and to propose an approach using attack injection for evaluating IDSs in virtualized environments. Finally, this thesis presents the first metric and measurement methodology for quantifying the attack detection accuracy of IDSs in virtualized environments that feature elastic resource provisioning.
From a technical perspective, as part of the proposed approach for evaluating IDSs, this thesis presents hInjector, a tool for injecting hypercall attacks. We designed hInjector to enable the rigorous, representative, and practically feasible evaluation of IDSs using attack injection. We demonstrate the application and practical usefulness of hInjector, as well as of the proposed approach, by evaluating a representative hypervisor-based IDS designed to detect hypercall attacks. While we focus on evaluating the capabilities of IDSs to detect hypercall attacks, the proposed IDS evaluation approach can be generalized and applied in a broader context. For example, it may be directly used to evaluate security mechanisms of hypervisors, such as hypercall access control (AC) mechanisms. It may also be applied to evaluate the capabilities of IDSs to detect attacks involving operations that are functionally similar to hypercalls, for example, the input/output control (ioctl) calls supported by the Kernel-based Virtual Machine (KVM) hypervisor. For IDSs in virtualized environments featuring elastic resource provisioning, our approach for injecting hypercall attacks can be applied in combination with the attack detection accuracy metric and measurement methodology we propose. Our approach for injecting hypercall attacks, as well as our metric and measurement methodology, can also be applied independently, beyond the scenarios considered in this thesis. The wide spectrum of security mechanisms in virtualized environments whose evaluation can directly benefit from the contributions of this thesis (e.g., hypervisor-based IDSs, IDSs deployed as VNFs, and AC mechanisms) reflects the practical relevance of the thesis.
Computer systems have replaced the human workforce in many parts of everyday life, but there are still many tasks that cannot yet be automated. This includes tasks that we consider rather simple, such as the categorization of image content or subjective ratings. Traditionally, these tasks have been completed by designated employees or outsourced to specialized companies. Recently, however, the crowdsourcing paradigm has increasingly been applied to complete such labor-intensive tasks. Crowdsourcing aims at leveraging the huge number of Internet users all around the globe, who form a potentially highly available, low-cost, and easily accessible workforce.
To enable the distribution of work on a global scale, new web-based services emerged, so-called crowdsourcing platforms, which act as mediators between employers posting tasks and workers completing them. However, the crowdsourcing approach, especially the large anonymous worker crowd, results in two types of challenges. On the one hand, there are technical challenges, such as the dimensioning of crowdsourcing platform infrastructure or the interconnection of crowdsourcing platforms and machine clouds to build hybrid services. On the other hand, there are conceptual challenges, such as identifying reliable workers or migrating traditional off-line work to the crowdsourcing environment. To tackle these challenges, this monograph analyzes and models current crowdsourcing systems in order to optimize crowdsourcing workflows and the underlying infrastructure. First, a categorization of crowdsourcing tasks and platforms is developed to derive generalizable properties. Based on this categorization and an exemplary analysis of a commercial crowdsourcing platform, models for different aspects of crowdsourcing platforms and crowdsourcing mechanisms are developed. A special focus is put on quality assurance mechanisms for crowdsourcing tasks, where the models are used to assess the suitability and costs of existing approaches for different types of tasks. Further, a novel quality assurance mechanism based solely on user interactions is proposed and its feasibility is shown. The findings from the analysis of existing platforms, the derived models, and the developed quality assurance mechanisms are finally used to derive best practices for two crowdsourcing use cases: crowdsourcing-based network measurements and crowdsourcing-based subjective user studies. These two exemplary use cases cover aspects typical of a large range of crowdsourcing tasks and illustrate the potential benefits, but also the resulting challenges, of using crowdsourcing.
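The monograph's models are not reproduced here. Purely as an illustration of how such models can relate the costs and the result quality of a quality assurance mechanism, the following Python sketch estimates both for majority voting, a common quality assurance approach, as a function of the redundancy level; the worker reliability and the payment per answer are assumed values, not results from the monograph:

from math import comb

def majority_correct(p: float, n: int) -> float:
    # Probability that the majority of n independent workers, each correct
    # with probability p, yields the correct answer (n odd).
    k = n // 2 + 1
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

reward_per_answer = 0.05   # assumed payment per worker answer (USD)
worker_reliability = 0.8   # assumed probability of a correct answer

for n in (1, 3, 5, 7):
    quality = majority_correct(worker_reliability, n)
    cost = n * reward_per_answer
    print(f"n={n}: quality={quality:.3f}, cost per task={cost:.2f} USD")

As the output shows, higher redundancy improves result quality with diminishing returns while the cost grows linearly, which is the kind of trade-off such models make explicit.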
With the ongoing digitalization and globalization of the labor markets, the crowdsourcing paradigm is expected to gain even more importance in the coming years. This is already evident in newly emerging fields of crowdsourcing, such as enterprise crowdsourcing or mobile crowdsourcing. The models developed in the monograph enable platform providers to optimize their current systems and employers to optimize their workflows to increase their commercial success. Moreover, the results help to improve the general understanding of crowdsourcing systems, a key to identifying necessary adaptations and future improvements.
Carrying out public construction projects requires intensive collaboration among many parties: the project management located in the owner's building authority, the users of the building (e.g., a university or a public agency), the owner's governing bodies (municipal, district, or federal parliament), its budget department, building designers and specialist planners (freelance or employed by the building authority), expert consultants, construction companies, suppliers and service providers, and the authorities responsible for regional planning, plan approval, and permits. The planning, approval, and construction process usually extends over several years. Throughout this time, an intensive exchange of information and communication between the parties is required. Construction plans, bills of quantities, offers, contracts, minutes, construction schedules, and invoices are still exchanged by e-mail or on paper. Because most parties supervise several construction projects at the same time, this regularly leads to a challengingly large volume of correspondence and a poor overview of the current project data for almost all of them.
Because the subprocesses are highly interdependent across all phases, smooth coordination and the constant availability of up-to-date data for all parties are indispensable prerequisites for executing a construction project quickly and within the planned budget. While data exchange and coordination in large commercial construction projects are already successfully supported by virtual project rooms, public building authorities remain hesitant in this respect. The goal of this work, the creation of a uniform, cross-process data model specifically for the workflows of public-sector clients, could help to make the advantages of a central data repository accessible to all parties available to building authorities and their projects, and to merge previously separate data stores into a single one (data integration). A thorough analysis of the workflows and information flows between the parties across all phases of a public construction project, together with a survey of the virtual project rooms currently available on the market, forms the basis in the first part of the work for the modeling of the data and their interrelationships in the second part.
The comprehensive representation of the parties, their roles and tasks, the documents, and the associated metadata across all phases and construction disciplines constitutes a new research contribution. The differing terms used, for example, in building construction and civil engineering projects were preserved in the interest of comprehensibility but merged into a common structure. This modeling is the prerequisite for improved IT support of public construction projects and at the same time the core task of the business informatics specialist as a mediator between users and developers.
Owing to its conception spanning administrations and construction disciplines, the data model developed in this work can serve, in the sense of a reference model, as the basis of standard application software that can be deployed with little customization effort for a large number of public-sector customers. Examples are project room applications and workflow management systems. It is at the same time a reference proposal to the developers of existing applications for defining interfaces and, ultimately, for implementing cross-application integration approaches.
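The abstract does not prescribe a concrete notation for the model. Purely as an illustrative sketch of the kind of integrated data model described above (all entity and attribute names are our own assumptions, not the reference model developed in the work), the core entities could be captured as follows:

from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

# Illustrative sketch only: entity and attribute names are assumptions,
# not the reference data model developed in the thesis.

@dataclass
class Party:
    name: str
    role: str            # e.g., "project management", "planner", "contractor"

@dataclass
class Document:
    title: str
    doc_type: str        # e.g., "construction plan", "bill of quantities"
    version: int
    issued_on: date
    author: Party
    metadata: Dict[str, str] = field(default_factory=dict)

@dataclass
class ProjectPhase:
    name: str            # e.g., "planning", "approval", "construction"
    documents: List[Document] = field(default_factory=list)

@dataclass
class ConstructionProject:
    title: str
    owner: Party
    parties: List[Party] = field(default_factory=list)
    phases: List[ProjectPhase] = field(default_factory=list)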