000 Computer science, information, general works
This work in the field of digital literary stylistics and computational literary studies is concerned with theoretical questions of literary genre, with the design of a corpus of nineteenth-century Spanish-American novels, and with its empirical analysis in terms of subgenres of the novel. The digital text corpus consists of 256 Argentine, Cuban, and Mexican novels from the period between 1830 and 1910. It was created with the goal of analyzing thematic subgenres and literary currents that were represented in numerous novels in the nineteenth century by means of computational text categorization methods. The texts were gathered from different sources, encoded in the standard of the Text Encoding Initiative (TEI), and enriched with detailed bibliographic and subgenre-related metadata, as well as with structural information.
To categorize the texts, statistical classification and a family resemblance analysis based on network analysis are used with the aim of examining how the subgenres, which are understood as communicative, conventional phenomena, can be captured on the stylistic, textual level of the novels that participate in them. The result is that both thematic subgenres and literary currents are textually coherent to degrees of 70–90 %, depending on the individual subgenre constellation, meaning that the communicatively established subgenre classifications can be captured with this accuracy in terms of textually defined classes.
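To make the categorization step more tangible, the following is a minimal sketch of a text categorization pipeline of the kind described above, assuming plain-text versions of the novels and a metadata table with subgenre labels; the file layout, column names, feature set, and classifier are illustrative assumptions, not the setup used in the thesis.

```python
# Illustrative sketch of subgenre classification on a novel corpus.
# Assumes a CSV "metadata.csv" with columns "file" and "subgenre" (hypothetical names)
# and plain-text versions of the TEI-encoded novels in "corpus/".
from pathlib import Path

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

meta = pd.read_csv("metadata.csv")
texts = [Path("corpus", name).read_text(encoding="utf-8") for name in meta["file"]]
labels = meta["subgenre"]

# TF-IDF over the most frequent words, followed by a linear classifier.
pipeline = make_pipeline(
    TfidfVectorizer(max_features=5000),
    LinearSVC(),
)

# Cross-validated accuracy as a rough measure of how well the communicatively
# established subgenre labels can be recovered from the texts themselves.
scores = cross_val_score(pipeline, texts, labels, cv=10, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.2f}")
```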
Besides the empirical focus, the dissertation also aims to relate literary theoretical genre concepts to the ones used in digital genre stylistics and computational literary studies as subfields of digital humanities. It is argued that literary text types, conventional literary genres, and textual literary genres should be distinguished on a theoretical level to improve the conceptualization of genre for digital text analysis.
Environmental issues have emerged especially since humans began burning fossil fuels, leading to air pollution and climate change that harm the environment. The substantial consequences of these issues have prompted strong efforts to assess the state of our environment.
Various environmental machine learning (ML) tasks aid these efforts. These tasks concern environmental data but are otherwise common ML tasks, i.e., datasets are split (training, validation, test), hyperparameters are optimized on validation data, and test set metrics measure a model’s generalizability. This work focuses on the following environmental ML tasks: Regarding air pollution, land use regression (LUR) estimates air pollutant concentrations at locations where no measurements are available, based on measured locations and each location’s land use (e.g., industry, streets). For LUR, this work uses data from London (modeled) and Zurich (measured). Concerning climate change, a common ML task is model output statistics (MOS), where a climate model’s output for a study area is altered to better fit Earth observations and provide more accurate climate data. This work uses the regional climate model (RCM) REMO and Earth observations from the E-OBS dataset for MOS. Another climate-related task is grain size distribution interpolation, where soil properties at locations without measurements are estimated based on the few measured locations. This can provide climate models with soil information, which is important for hydrology. For this task, data from Lower Franconia is used.
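For illustration, the classic (linear) form of MOS can be sketched as a per-grid-cell regression from climate model output to observations; the data shapes and the linear correction below are assumptions for illustration only, and the thesis's own MOS approach, introduced later, is based on deep learning instead.

```python
# Illustrative sketch of classic (linear) model output statistics (MOS):
# per grid cell, fit a linear correction from climate model output to observations.
# The synthetic data and the linear form are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: (time, lat, lon) precipitation fields.
model_out = rng.gamma(2.0, 1.5, size=(365, 10, 10))                  # RCM output
observed = 0.8 * model_out + rng.normal(0.0, 0.3, model_out.shape)   # pseudo observations

corrected = np.empty_like(model_out)
for i in range(model_out.shape[1]):
    for j in range(model_out.shape[2]):
        # Least-squares fit y = a * x + b for this grid cell.
        a, b = np.polyfit(model_out[:, i, j], observed[:, i, j], deg=1)
        corrected[:, i, j] = a * model_out[:, i, j] + b

print("RMSE before:", np.sqrt(np.mean((model_out - observed) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((corrected - observed) ** 2)))
```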
Such environmental ML tasks commonly have a number of properties: (i) geospatiality, i.e., their data refers to locations relative to the Earth’s surface. (ii) The environmental variables to estimate or predict are usually continuous. (iii) Data can be imbalanced due to relatively rare extreme events (e.g., extreme precipitation). (iv) Multiple related potential target variables can be available per location, since measurement devices often contain different sensors. (v) Labels are spatially often only sparsely available since conducting measurements at all locations of interest is usually infeasible. These properties present challenges but also opportunities when designing ML methods for such tasks.
In the past, environmental ML tasks have been tackled with conventional ML methods, such as linear regression or random forests (RFs). However, the field of ML has made tremendous leaps beyond these classic models through deep learning (DL). In DL, models use multiple layers of neurons, producing increasingly higher-level feature representations with growing layer depth. DL has made previously infeasible ML tasks feasible, significantly improved performance on many tasks compared to existing ML models, and eliminated the need for manual feature engineering in some domains due to its ability to learn features from raw data. To harness these advantages for environmental domains, it is promising to develop novel DL methods for environmental ML tasks.
This thesis presents methods for dealing with special challenges and exploiting opportunities inherent to environmental ML tasks in conjunction with DL. To this end, the proposed methods explore the following techniques: (i) Convolutions as in convolutional neural networks (CNNs) to exploit reoccurring spatial patterns in geospatial data. (ii) Posing the problems as regression tasks to estimate the continuous variables. (iii) Density-based weighting to improve estimation performance for rare and extreme events. (iv) Multi-task learning to make use of multiple related target variables. (v) Semi-supervised learning to cope with label sparsity. Using these techniques, this thesis considers four research questions: (i) Can air pollution be estimated without manual feature engineering? This is answered positively by the introduction of the CNN-based LUR model MapLUR as well as the off-the-shelf LUR solution OpenLUR. (ii) Can colocated pollution data improve spatial air pollution models? Multi-task learning for LUR is developed for this, showing potential for improvements with colocated data. (iii) Can DL models improve the quality of climate model outputs? The proposed DL climate MOS architecture ConvMOS demonstrates this. Additionally, semi-supervised training of multilayer perceptrons (MLPs) for grain size distribution interpolation is presented, which can provide improved input data. (iv) Can DL models be taught to better estimate climate extremes? To this end, density-based weighting for imbalanced regression (DenseLoss) is proposed and applied to the DL architecture ConvMOS, improving climate extremes estimation. These methods show how especially DL techniques can be developed for environmental ML tasks with their special characteristics in mind. This allows for better models than previously possible with conventional ML, leading to more accurate assessment and better understanding of the state of our environment.
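As an illustration of technique (iii), density-based weighting for imbalanced regression can be sketched as follows: each training sample is weighted inversely to the estimated density of its target value, so that rare, extreme values contribute more to the loss. The concrete weighting function and parameters below are assumptions and may differ from the DenseLoss formulation.

```python
# Sketch of density-based sample weighting for imbalanced regression:
# samples with rare target values (e.g., extreme precipitation) receive higher
# weights in the loss. The parameter alpha and the normalization are illustrative.
import numpy as np
from scipy.stats import gaussian_kde

def density_based_weights(targets: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Weight each sample inversely to the estimated density of its target value."""
    density = gaussian_kde(targets)(targets)
    density = density / density.max()                          # scale to (0, 1]
    weights = np.maximum(1.0 - alpha * density, 0.0) + 1e-3    # rare values -> larger weight
    return weights / weights.mean()                            # keep the average loss scale

# Example: weighted mean squared error on skewed targets.
y_true = np.random.default_rng(0).gamma(2.0, 2.0, size=1000)
y_pred = y_true + np.random.default_rng(1).normal(0.0, 1.0, size=1000)
w = density_based_weights(y_true, alpha=0.9)
print("weighted MSE:", np.mean(w * (y_true - y_pred) ** 2))
```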
In manufacturing companies, various approaches are used to plan, monitor, and control production processes. One of these methods is known as the activity-on-node network planning technique (Vorgangsknotennetzplantechnik). The individual production steps are defined as nodes and connected by arrows. The arrows represent the relationships between the individual activities and thus the production flow. This technique gives users a comprehensive overview of the individual process relations. In addition, it can be used to determine activity times and product completion times, which enables detailed production planning. A drawback of this technique is that it represents only a single executable process sequence. If a disruption occurs and an activity can no longer be carried out, the original process must be deviated from, which makes replanning necessary. Alternatives for the disrupted activity are needed so that the process can continue despite the disruption. This thesis therefore describes an extension of the activity-on-node network planning technique that allows alternative activities to be specified for individual activities in addition to the planned target process. This method is referred to as the Maximalnetzplan. In the event of a disruption, the alternatives are evaluated automatically and presented to the user in prioritized order. By using the Maximalnetzplan, costly replanning can be avoided. An assembly process serves as an application example to demonstrate the usability of the method. Furthermore, a temporal analysis of randomly generated Maximalnetzplan instances provides a rationale for executing alternatives and thus demonstrates the benefit of the Maximalnetzplan. It should also be noted that terms used in this work such as user, worker, or employee are written in the masculine form. This is solely for simplicity and is not intended to discriminate against other genders; the wording is meant to address all genders, whether male, female, or diverse.
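To illustrate the core idea of the Maximalnetzplan, the following toy sketch models an activity-on-node plan in which individual activities carry alternative activities that are ranked when a disruption occurs; the data structure and the duration-based prioritization are assumptions for illustration only.

```python
# Toy sketch of the Maximalnetzplan idea: an activity-on-node plan in which
# individual activities carry alternative activities. When an activity is
# disrupted, its alternatives are evaluated and presented in prioritized order.
# Ranking by duration is an assumption; the thesis may use other criteria.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    duration: float                                      # e.g., minutes
    successors: list = field(default_factory=list)       # arrows in the plan
    alternatives: list = field(default_factory=list)     # alternative Activity objects

def prioritized_alternatives(activity: Activity) -> list:
    """Return alternatives for a disrupted activity, best (shortest) first."""
    return sorted(activity.alternatives, key=lambda alt: alt.duration)

# Example: manual screwing as a fallback for a disrupted robotic screwing step.
screw_manual = Activity("screw housing manually", duration=6.0)
screw_robot = Activity("screw housing (robot)", duration=2.5, alternatives=[screw_manual])
assemble = Activity("assemble housing", duration=4.0, successors=[screw_robot])

for alt in prioritized_alternatives(screw_robot):
    print(f"alternative: {alt.name} ({alt.duration} min)")
```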
Serverless computing is an emerging cloud computing paradigm that offers a high-level application programming model with utilization-based billing. It enables the deployment of cloud applications without managing the underlying resources or worrying about other operational aspects. Function-as-a-Service (FaaS) platforms implement serverless computing by allowing developers to execute code on demand in response to events with continuous scaling, while paying only for the time used with sub-second metering. Cloud providers have further introduced many fully managed services for databases, messaging buses, and storage that also implement a serverless computing model. Applications composed of these fully managed services and FaaS functions are quickly gaining popularity in both industry and academia.
However, due to this rapid adoption, much information surrounding serverless computing is inconsistent and often outdated as the serverless paradigm evolves. This makes the performance engineering of serverless applications and platforms challenging, as there are many open questions, such as: What types of applications is serverless computing well suited for, and what are its limitations? How should serverless applications be designed, configured, and implemented? Which design decisions impact the performance properties of serverless platforms, and how can they be optimized? These and many other open questions can be traced back to an inconsistent understanding of serverless applications and platforms, which could present a major roadblock in the adoption of serverless computing.
In this thesis, we address the lack of performance knowledge surrounding serverless applications and platforms from multiple angles: we conduct empirical studies to further the understanding of serverless applications and platforms, we introduce automated optimization methods that simplify the operation of serverless applications, and we enable the analysis of design tradeoffs of serverless platforms by extending white-box performance modeling.
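As one concrete illustration of the FaaS programming model described above, the following shows a minimal Python function in the AWS-Lambda-style handler convention; the platform choice and the event shape are illustrative assumptions, not specifics of the thesis.

```python
# Minimal FaaS-style function in the AWS Lambda Python convention:
# the platform invokes `handler` on demand for each event, scales instances
# automatically, and bills only for the execution time of each invocation.
# The event structure below is an illustrative assumption.
import json

def handler(event, context):
    # Event-driven input, e.g. from an HTTP gateway or a message queue.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```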
Since the advent of high-throughput sequencing technologies in the mid-2010s, RNA sequencing (RNA-seq) has been established as the method of choice for studying gene expression. In comparison to microarray-based methods, which had mainly been used to study gene expression before the rise of RNA-seq, RNA-seq is able to profile the entire transcriptome of an organism without the need to predefine genes of interest. Today, a wide variety of RNA-seq methods and protocols exist, including dual RNA sequencing (dual RNA-seq) and multi RNA sequencing (multi RNA-seq), which simultaneously investigate the transcriptomes of two or more species, respectively. To this end, the total RNA of all interacting species is sequenced together and only separated in silico. Compared to conventional RNA-seq, which can only investigate one species at a time, dual RNA-seq and multi RNA-seq analyses can connect the transcriptome changes of the species being investigated and thus give a clearer picture of the interspecies interactions. Dual RNA-seq and multi RNA-seq have been applied to a variety of host-pathogen, mutualistic, and commensal interaction systems.
We applied dual RNA-seq to a host-pathogen system of human mast cells and Staphylococcus aureus (S. aureus). S. aureus, a commensal gram-positive bacterium, can become an opportunistic pathogen and infect skin lesions of atopic dermatitis (AD) patients. Among the first immune cells S. aureus encounters are mast cells, which have previously been shown to be able to kill the bacteria by discharging antimicrobial products and releasing extracellular traps made of protein and deoxyribonucleic acid (DNA). However, S. aureus is known to evade the host’s immune response by internalizing within mast cells. Our dual RNA-seq analysis of different infection settings revealed that mast cells and S. aureus need physical contact to influence each other’s gene expression. We could show that S. aureus cells internalizing within mast cells undergo profound transcriptome changes to adjust their metabolism to survive in the intracellular niche. On the host side, we found that infected mast cells elicit a type-I interferon (IFN-I) response in an autocrine manner and in a paracrine manner towards non-infected bystander cells. Our study provides the first evidence that mast cells are capable of producing IFN-I upon infection with a bacterial pathogen.
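To illustrate the in silico separation step mentioned above, the following sketch assigns aligned reads to host or pathogen based on the reference sequence they map to in a combined reference; the use of pysam, the file name, and the reference-name prefixes are assumptions for illustration.

```python
# Sketch of in silico read separation for dual RNA-seq: after aligning all reads
# against a combined human + S. aureus reference, assign each mapped read to an
# organism based on the reference sequence it aligned to. Prefixes and file names
# are illustrative assumptions.
import pysam  # assumes alignments in BAM format

HOST_PREFIXES = ("chr",)          # e.g., human chromosomes named chr1, chr2, ...
PATHOGEN_PREFIXES = ("NC_",)      # e.g., S. aureus RefSeq accessions

counts = {"host": 0, "pathogen": 0, "unassigned": 0}

with pysam.AlignmentFile("dual_rnaseq.bam", "rb") as bam:
    for read in bam.fetch(until_eof=True):
        if read.is_unmapped or read.is_secondary or read.is_supplementary:
            continue
        ref = bam.get_reference_name(read.reference_id)
        if ref.startswith(HOST_PREFIXES):
            counts["host"] += 1
        elif ref.startswith(PATHOGEN_PREFIXES):
            counts["pathogen"] += 1
        else:
            counts["unassigned"] += 1

print(counts)
```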
Detecting anomalies in transaction data is an important task with a high potential to avoid financial loss due to irregularities carried out deliberately or inadvertently, such as credit card fraud, occupational fraud in companies, or ordering and accounting errors. With the ongoing digitization of our world, data-driven approaches, including machine learning, can draw benefit from data with less manual effort and feature engineering. A large variety of machine learning-based anomaly detection methods approach this by learning a precise model of normality from which anomalies can be distinguished. Modeling normality in transactional data, however, requires capturing distributions and dependencies within the data precisely, with special attention to numerical dependencies such as quantities, prices, or amounts.
To implicitly model numerical dependencies, Neural Arithmetic Logic Units (NALUs) have been proposed as a neural architecture. In practice, however, they suffer from stability and precision issues.
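For context, the following is a compact PyTorch sketch of the original NALU cell (Trask et al., 2018) that iNALU builds upon; the iNALU-specific modifications, such as separate weights for the additive and multiplicative paths and additional stabilization measures, are not shown.

```python
# Compact PyTorch sketch of the original NALU cell (Trask et al., 2018), which
# iNALU refines for better stability and precision (iNALU-specific changes omitted).
import torch
from torch import nn

class NALU(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, eps: float = 1e-7):
        super().__init__()
        self.W_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.M_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.G = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Neural accumulator: weights pushed towards {-1, 0, 1} for add/subtract.
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        add_path = x @ W.t()
        # Multiplicative path: the same accumulator applied in log space.
        mul_path = torch.exp(torch.log(torch.abs(x) + self.eps) @ W.t())
        # Learned gate decides between addition and multiplication per output.
        gate = torch.sigmoid(x @ self.G.t())
        return gate * add_path + (1.0 - gate) * mul_path

# Example: a single NALU layer mapping two inputs to one output.
layer = NALU(in_dim=2, out_dim=1)
print(layer(torch.tensor([[3.0, 4.0]])))
```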
Therefore, we first develop an improved neural network architecture, iNALU, which is designed to better model numerical dependencies as found in transaction data. We compare this architecture to the previous approach and show in several experiments of varying complexity that our novel architecture provides better precision and stability.
We integrate this architecture into two generative neural network models adapted for transaction data and investigate how well normal behavior is modeled. We show that both architectures can successfully model normal transaction data, with our neural architecture improving generative performance for one model.
Since categorical and numerical variables are common in transaction data, but many machine learning methods only process numerical representations, we explore different representation learning techniques to transform categorical transaction data into dense numerical vectors. We extend this approach by proposing an outlier-aware discretization, thus incorporating numerical attributes into the computation of categorical embeddings, and investigate latent spaces, as well as quantitative performance for anomaly detection.
Next, we evaluate different scenarios for anomaly detection on transaction data. We extend our iNALU architecture to a neural layer that can model both numerical and non-numerical dependencies and evaluate it in a supervised and one-class setting. We investigate the stability and generalizability of our approach and show that it outperforms a variety of models in the balanced supervised setting and performs comparably in the one-class setting. Finally, we evaluate three approaches to using a generative model as an anomaly detector and compare the anomaly detection performance.
Latency is an inherent problem of computing systems: each computation takes time until its result is available. Virtual reality systems use elaborate computing resources to create virtual experiences, and the latency of those systems is often ignored or assumed to be small enough to provide a good experience.
This cumulative thesis comprises published, peer-reviewed research papers exploring the behaviour and effects of latency. Contrary to the common description of latency as time-invariant, latency is shown to fluctuate. Few other researchers have looked into this time-variant behaviour. This thesis explores time-variant latency with a focus on randomly occurring latency spikes. Latency spikes are observed both for small algorithms and as end-to-end latency in complete virtual reality systems. Most latency measurements gather close to the mean latency, with potentially multiple smaller clusters of larger latency values and rare extreme outliers. The latency behaviour differs between implementations of an algorithm. Operating system schedulers and programming language environments such as garbage collectors contribute to the overall latency behaviour. The thesis demonstrates these influences using the example of different implementations of message passing.
The plethora of latency sources results in unpredictable latency behaviour. Measuring and reporting it in scientific experiments is therefore important. This thesis describes established approaches to measuring latency and proposes an enhanced setup to gather detailed information. It further proposes dissecting the measured data with a stacked z-outlier test to separate the clusters of latency measurements for better reporting.
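One plausible reading of such a cluster separation is sketched below: a z-score threshold is applied repeatedly to split off the largest latency values into their own group. The threshold and the exact procedure are assumptions for illustration; the thesis defines its own stacked z-outlier test.

```python
# Hedged sketch of separating latency measurements into clusters by repeatedly
# splitting off z-score outliers. The threshold of 3 and the overall procedure
# are illustrative assumptions, not the thesis's actual stacked z-outlier test.
import numpy as np

def split_latency_clusters(samples, z_threshold: float = 3.0) -> list:
    """Return latency clusters, from the bulk of the data to the most extreme spikes."""
    clusters = []
    remaining = np.asarray(samples, dtype=float)
    while remaining.size > 1:
        z = (remaining - remaining.mean()) / remaining.std()
        outliers = z > z_threshold                  # only unusually large latencies
        if not outliers.any():
            break
        clusters.append(remaining[~outliers])       # bulk at this level
        remaining = remaining[outliers]             # re-test the spikes among themselves
    clusters.append(remaining)                      # final (most extreme) cluster
    return clusters

# Example: frame times near the mean, a dropped-frame cluster, and rare spikes (ms).
rng = np.random.default_rng(0)
latencies = np.concatenate([
    rng.normal(16.7, 0.5, 950),
    rng.normal(33.4, 1.0, 45),
    rng.normal(120.0, 10.0, 5),
])
for i, cluster in enumerate(split_latency_clusters(latencies)):
    print(f"cluster {i}: n={cluster.size}, mean={cluster.mean():.1f} ms")
```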
Latency in virtual reality applications can degrade the experience in multiple ways. The thesis focuses on cybersickness as a major detrimental effect. An approach to simulating time-variant latency is proposed to make latency available as an independent variable in experiments, so that its effects can be studied. An experiment with modified latency shows that latency spikes can contribute to cybersickness. A review of related research shows that different time-invariant latency behaviour also contributes to cybersickness.
The digital transformation facilitates new forms of collaboration between companies along the supply chain and between companies and consumers. Besides sharing information on centralized platforms, blockchain technology is often regarded as a potential basis for this kind of collaboration. However, there is much hype surrounding the technology due to the rising popularity of cryptocurrencies, decentralized finance (DeFi), and non-fungible tokens (NFTs). This leads to potential issues being overlooked. Therefore, this thesis aims to investigate, highlight, and address the current weaknesses of blockchain technology: Inefficient consensus, privacy, smart contract security, and scalability.
First, to provide a foundation, the four key challenges are introduced, and the research objectives are defined, followed by a brief presentation of the preliminary work for this thesis.
The following four parts highlight the four main problem areas of blockchain technology. Using big data analytics, we extracted and analyzed the blockchain data of six major blockchains to identify potential weaknesses in their consensus algorithms. To improve smart contract security, we classified smart contract functionalities to identify similarities in structure and design. The resulting taxonomy serves as a basis for future standardization efforts for security-relevant features, such as safe math functions and oracle services. To challenge privacy assumptions, we researched consortium blockchains from an adversary's perspective. We chose four blockchains with misconfigured nodes and extracted as much information from those nodes as possible. Finally, we compared scalability solutions for blockchain applications and developed a decision process that serves as a guideline for developers to improve the scalability of their applications.
Building on the scalability framework, we showcase three potential applications for blockchain technology. First, we develop a token-based approach for inter-company value stream mapping. By relying only on simple tokens instead of complex smart contracts, the computational load on the network is expected to be much lower compared to other solutions. The following two solutions offload transactions and computations from the main blockchain. The first approach uses secure multiparty computation to offload the matching of supply and demand for manufacturing capacities to a trustless network; the transaction is written to the main blockchain only after the match is made. The second approach uses the concept of payment channel networks to enable high-frequency bidirectional micropayments for WiFi sharing. The host gets paid for every second of data usage through an off-chain channel, and the full payment is only written to the blockchain after the connection to the client is terminated.
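To make the off-chain micropayment idea more concrete, the following toy sketch models a payment channel state that is updated for every second of WiFi usage and only settled on-chain at the end; the price, the hash-based stand-in for a signature, and the settlement step are simplified assumptions.

```python
# Toy sketch of off-chain micropayments in a payment channel: the client keeps
# sending updated (and, in a real channel, cryptographically signed) balance
# states to the host for every second of WiFi usage; only the final state is
# written to the blockchain when the connection ends.
import hashlib
from dataclasses import dataclass

PRICE_PER_SECOND = 2  # hypothetical price in smallest token units

@dataclass
class ChannelState:
    nonce: int          # strictly increasing, so the latest state wins
    paid_to_host: int   # cumulative amount owed to the host

    def digest(self, secret: bytes) -> str:
        """Stand-in for a real signature over the channel state."""
        payload = f"{self.nonce}:{self.paid_to_host}".encode()
        return hashlib.sha256(payload + secret).hexdigest()

def stream_payments(seconds_used: int, client_secret: bytes) -> ChannelState:
    state = ChannelState(nonce=0, paid_to_host=0)
    for second in range(1, seconds_used + 1):
        # Off-chain update: no blockchain transaction happens here.
        state = ChannelState(nonce=second, paid_to_host=second * PRICE_PER_SECOND)
        _signature = state.digest(client_secret)   # sent to the host, kept off-chain
    return state

final_state = stream_payments(seconds_used=300, client_secret=b"demo")
# Only this final, highest-nonce state would be submitted on-chain for settlement.
print(final_state)
```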
Finally, the thesis concludes by briefly summarizing and discussing the results and providing avenues for further research.
An enduring engineering problem is the creation of unreliable software leading to unreliable systems. One reason for this is that source code is written in a complicated manner, making it too hard for humans to review and understand. Complicated code leads to other issues beyond dependability, such as expanded development efforts and ongoing difficulties with maintenance, ultimately costing developers and users more money.
There are many ideas regarding where the blame lies for the creation of buggy and unreliable systems. One prevalent idea is that the selected life cycle model is to blame. The oft-maligned “waterfall” life cycle model is a particularly popular recipient of blame. In response, many organizations changed their life cycle model in hopes of addressing these issues. Agile life cycle models have become very popular, and they promote communication between team members and end users. In theory, this communication leads to fewer misunderstandings and should lead to less complicated and more reliable code.
Changing the life cycle model can indeed address communication issues, which can resolve many problems with understanding requirements. However, most life cycle models do not specifically address coding practices or software architecture. Since life cycle models do not address the structure of the code, they are often ineffective at addressing problems related to code complicacy.
This dissertation answers several research questions concerning software complicacy, beginning with an investigation of traditional metrics and static analysis to evaluate their usefulness as measurement tools. This dissertation also establishes a new concept in applied linguistics by creating a measurement of software complicacy based on linguistic economy. Linguistic economy describes the efficiencies of speech, and this thesis shows the applicability of linguistic economy to software. Embedded in each topic is a discussion of the ramifications of overly complicated software, including the relationship of complicacy to software faults. Image recognition using machine learning is also investigated as a potential method of identifying problematic source code.
The central part of the work focuses on analyzing the source code of hundreds of different projects from different areas. A static analysis was performed on the source code of each project, and traditional software metrics were calculated. Programs were also analyzed using techniques developed by linguists to measure expression and statement complicacy and identifier complicacy. Professional software engineers were also directly surveyed to understand mainstream perspectives.
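The following sketch illustrates the kind of lightweight static measurement described above, counting statements and expressions and measuring identifier lengths with Python's ast module; the thesis analyzed many projects with its own tooling and metrics, so this only conveys the general idea.

```python
# Illustrative sketch of simple source-code measurements in the spirit of the
# analysis described above: counting statements and expressions and measuring
# identifier lengths with Python's ast module.
import ast

SOURCE = """
def total_price(items, tax_rate):
    subtotal = sum(item.price * item.qty for item in items)
    return subtotal * (1 + tax_rate)
"""

tree = ast.parse(SOURCE)
statements = sum(isinstance(node, ast.stmt) for node in ast.walk(tree))
expressions = sum(isinstance(node, ast.expr) for node in ast.walk(tree))
identifiers = [node.id for node in ast.walk(tree) if isinstance(node, ast.Name)]
avg_identifier_length = sum(map(len, identifiers)) / len(identifiers)

print(f"statements: {statements}")
print(f"expressions: {expressions}")
print(f"average identifier length: {avg_identifier_length:.1f}")
```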
This work shows it is possible to use traditional metrics as indicators of potential project bugginess. This work also discovered it is possible to use image recognition to identify problematic pieces of source code. Finally, this work discovered it is possible to use linguistic methods to determine which statements and expressions are least desirable and more complicated for programmers.
This work’s principal conclusion is that there are multiple ways to discover traits indicating that a project or a piece of source code has characteristics of being buggy. Traditional metrics and static analysis can be used to gain some understanding of software complicacy and bugginess potential. Linguistic economy provides a new tool for measuring software complicacy, and machine learning can predict where bugs may lie in source code. The significant implication of this work is that developers can recognize when a project is becoming buggy and take practical steps to avoid creating buggy projects.
With the miniaturization of satellites, a fundamental change has taken place in the space industry. Instead of single, large, monolithic satellites, more and more systems are now envisaged that consist of a number of small satellites forming cooperating systems in space. The lower costs for development and launch, as well as the spatial distribution of these systems, enable the implementation of new scientific missions and commercial services.
With this paradigm shift, new challenges constantly emerge for satellite developers, particularly in the area of wireless communication systems and network protocols.
Satellites in low Earth orbits and ground stations form dynamic space-terrestrial networks. The characteristics of these networks differ fundamentally from those of other networks.
The resulting challenges with regard to communication system design, system analysis, packet forwarding, routing and medium access control as well as challenges concerning the reliability and efficiency of wireless communication links are addressed in this thesis.
The physical modeling of space-terrestrial networks is addressed by analyzing existing satellite systems and communication devices, by evaluating measurements and by implementing a simulator for space-terrestrial networks.
The resulting system and channel models were used as a basis for predicting the dynamic network topologies, link properties, and channel interference. These predictions allowed for the implementation of efficient routing and medium access control schemes for space-terrestrial networks. Furthermore, the implementation and utilization of software-defined ground stations are addressed, and a data upload scheme for the operation of small satellite formations is presented.
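As a small illustration of the topology prediction mentioned above, the following sketch derives ground-station contact windows from predicted satellite elevation angles; the sinusoidal elevation profile and the elevation mask are synthetic stand-ins, since real predictions would use orbit propagation (e.g., SGP4 from TLE data).

```python
# Sketch of deriving ground-station contact windows for a LEO satellite from
# predicted elevation angles: the station can communicate whenever the satellite
# is above a minimum elevation mask. The elevation profile below is a synthetic
# stand-in for real orbit propagation.
import math

ELEVATION_MASK_DEG = 10.0   # assumed minimum elevation for a usable link
STEP_S = 10                 # prediction time step in seconds

def predicted_elevation_deg(t: int) -> float:
    """Synthetic elevation profile with a roughly 95-minute orbital period."""
    return 90.0 * math.sin(2 * math.pi * t / (95 * 60)) - 40.0

def contact_windows(duration_s: int) -> list:
    """Return (start, end) pairs in seconds during which the link is available."""
    windows, start = [], None
    for t in range(0, duration_s, STEP_S):
        visible = predicted_elevation_deg(t) >= ELEVATION_MASK_DEG
        if visible and start is None:
            start = t
        elif not visible and start is not None:
            windows.append((start, t))
            start = None
    if start is not None:
        windows.append((start, duration_s))
    return windows

for start, end in contact_windows(duration_s=6 * 60 * 60):
    print(f"contact from t={start} s to t={end} s ({(end - start) / 60:.1f} min)")
```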