@phdthesis{Zuefle2022, author = {Z{\"u}fle, Marwin Otto}, title = {Proactive Critical Event Prediction based on Monitoring Data with Focus on Technical Systems}, doi = {10.25972/OPUS-25575}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-255757}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2022}, abstract = {The importance of proactive and timely prediction of critical events is steadily increasing, whether in the manufacturing industry or in private life. In the past, machines in the manufacturing industry were often maintained based on a regular schedule or threshold violations, which is no longer competitive as it causes unnecessary costs and downtime. In contrast, the predictions of critical events in everyday life are often much more concealed and hardly noticeable to the private individual, unless the critical event occurs. For instance, our electricity provider has to ensure that we, as end users, are always supplied with sufficient electricity, or our favorite streaming service has to guarantee that we can watch our favorite series without interruptions. For this purpose, they have to constantly analyze what the current situation is, how it will develop in the near future, and how they have to react in order to cope with future conditions without causing power outages or video stalling. In order to analyze the performance of a system, monitoring mechanisms are often integrated to observe characteristics that describe the workload and the state of the system and its environment. Reactive systems typically employ thresholds, utility functions, or models to determine the current state of the system. However, such reactive systems cannot proactively estimate future events; they can only detect them as they occur. In the case of critical events, reactive determination of the current system state is futile, whereas a proactive system could have predicted this event in advance and enabled timely countermeasures. To achieve proactivity, the system requires estimates of future system states. Given the gap between design time and runtime, it is typically not possible to use expert knowledge to a priori model all situations a system might encounter at runtime. Therefore, prediction methods must be integrated into the system. Depending on the available monitoring data and the complexity of the prediction task, either time series forecasting in combination with thresholding has to be applied, or more sophisticated machine and deep learning models have to be trained. Although numerous forecasting methods have been proposed in the literature, these methods have their advantages and disadvantages depending on the characteristics of the time series under consideration. Therefore, expert knowledge is required to decide which forecasting method to choose. However, since the time series observed at runtime cannot be known at design time, such expert knowledge cannot be implemented in the system. In addition to selecting an appropriate forecasting method, several time series preprocessing steps are required to achieve satisfactory forecasting accuracy. In the literature, this preprocessing is often done manually, which is not practical for autonomous computing systems, such as Self-Aware Computing Systems. Several approaches have also been presented in the literature for predicting critical events based on multivariate monitoring data using machine and deep learning. However, these approaches are typically highly domain-specific, such as financial failures, bearing failures, or product failures. 
Therefore, they require in-depth expert knowledge. For this reason, these approaches cannot be fully automated and are not transferable to other use cases. Thus, the literature lacks generalizable end-to-end workflows for modeling, detecting, and predicting failures that require little expert knowledge. To overcome these shortcomings, this thesis presents a system model for meta-self-aware prediction of critical events based on the LRA-M loop of Self-Aware Computing Systems. Building upon this system model, this thesis provides six further contributions to critical event prediction. While the first two contributions address critical event prediction based on univariate data via time series forecasting, the three subsequent contributions address critical event prediction for multivariate monitoring data using machine and deep learning algorithms. Finally, the last contribution addresses the update procedure of the system model. Specifically, the seven main contributions of this thesis can be summarized as follows: First, we present a system model for meta-self-aware prediction of critical events. To handle both univariate and multivariate monitoring data, it offers univariate time series forecasting for use cases where a single observed variable is representative of the state of the system, and machine learning algorithms combined with various preprocessing techniques for use cases where a large number of variables are observed to characterize the system's state. However, the two different modeling alternatives are not disjoint, as univariate time series forecasts can also be included to estimate future monitoring data as additional input to the machine learning models. Finally, a feedback loop is incorporated to monitor the achieved prediction quality and trigger model updates. Second, we propose a novel hybrid time series forecasting method for univariate, seasonal time series, called Telescope. To this end, Telescope automatically preprocesses the time series, applies a divide-and-conquer technique to split the time series into multiple components, and derives additional categorical information. It then forecasts the components and categorical information separately using a specific state-of-the-art method for each component. Finally, Telescope recombines the individual predictions. As Telescope performs both preprocessing and forecasting automatically, it represents a complete end-to-end approach to univariate seasonal time series forecasting. Experimental results show that Telescope achieves enhanced forecast accuracy, more reliable forecasts, and a substantial speedup. Furthermore, we apply Telescope to the scenario of predicting critical events for virtual machine auto-scaling. Here, results show that Telescope considerably reduces the average response time and significantly reduces the number of service level objective violations. Third, for the automatic selection of a suitable forecasting method, we introduce two frameworks for recommending forecasting methods. The first framework extracts various time series characteristics to learn the relationship between them and forecast accuracy. In contrast, the other framework divides the historical observations into internal training and validation parts to estimate the most appropriate forecasting method. Moreover, this framework also includes time series preprocessing steps. 
Comparisons of the proposed forecasting method recommendation frameworks with the individual state-of-the-art forecasting methods and with the state-of-the-art forecasting method recommendation approach show that the proposed frameworks considerably improve the forecast accuracy. Fourth, with regard to multivariate monitoring data, we present an end-to-end workflow to detect critical events in technical systems in the form of anomalous machine states. The end-to-end design includes raw data processing, phase segmentation, data resampling, feature extraction, and machine tool anomaly detection. In addition, the workflow does not rely on profound domain knowledge or specific monitoring variables, but merely assumes standard machine monitoring data. We evaluate the end-to-end workflow using data from a real CNC machine. The results indicate that conventional frequency analysis does not detect the critical machine conditions well, while our workflow detects the critical events very well with an F1-score of almost 91\%. Fifth, to predict critical events rather than merely detect them, we compare different modeling alternatives for critical event prediction in the use case of time-to-failure prediction of hard disk drives. Given that failure records are typically significantly less frequent than instances representing the normal state, we employ different oversampling strategies. Next, we compare the prediction quality of binary class modeling with downscaled multi-class modeling. Furthermore, we integrate univariate time series forecasting into the feature generation process to estimate future monitoring data. Finally, we model the time-to-failure using not only classification models but also regression models. The results suggest that multi-class modeling provides the overall best prediction quality with respect to practical requirements. In addition, we show that forecasting the features of the prediction model significantly improves the critical event prediction quality. Sixth, we propose an end-to-end workflow for predicting critical events of industrial machines. Again, this approach does not rely on expert knowledge except for the definition of monitoring data, and therefore represents a generalizable workflow for predicting critical events of industrial machines. The workflow includes feature extraction, feature handling, target class mapping, and model learning with integrated hyperparameter tuning via a grid-search technique. Drawing on the result of the previous contribution, the workflow models the time-to-failure prediction in terms of multiple classes, where we compare different labeling strategies for multi-class classification. The evaluation using real-world production data of an industrial press demonstrates that the workflow is capable of predicting six different time-to-failure windows with a macro F1-score of 90\%. When scaling the time-to-failure classes down to a binary prediction of critical events, the F1-score increases to above 98\%. Finally, we present four update triggers to assess when critical event prediction models should be re-trained during on-line application. Such re-training is required, for instance, due to concept drift. The update triggers introduced in this thesis take into account the elapsed time since the last update, the prediction quality achieved on the current test data, and the prediction quality achieved on the preceding test data. We compare the different update strategies with each other and with the static baseline model. 
The results demonstrate the necessity of model updates during on-line application and suggest that the update triggers that consider both the prediction quality of the current and preceding test data achieve the best trade-off between prediction quality and number of updates required. We are convinced that the contributions of this thesis provide significant impetus for the academic research community as well as for practitioners. First of all, to the best of our knowledge, we are the first to propose a fully automated, end-to-end, hybrid, component-based forecasting method for seasonal time series that also includes time series preprocessing. Due to the combination of reliably high forecast accuracy and reliably low time-to-result, it offers many new opportunities in applications requiring accurate forecasts within a fixed time period in order to take timely countermeasures. In addition, the promising results of the forecasting method recommendation systems provide new opportunities to enhance forecasting performance for all types of time series, not just seasonal ones. Furthermore, we are the first to expose the deficiencies of the prior state-of-the-art forecasting method recommendation system. Concerning the contributions to critical event prediction based on multivariate monitoring data, we have already collaborated closely with industrial partners, which supports the practical relevance of the contributions of this thesis. The automated end-to-end design of the proposed workflows that do not demand profound domain or expert knowledge represents a milestone in bridging the gap between academic theory and industrial application. Finally, the workflow for predicting critical events in industrial machines is currently being operationalized in a real production system, underscoring the practical impact of this thesis.}, subject = {Prognose}, language = {en} } @phdthesis{Zinner2012, author = {Zinner, Thomas}, title = {Performance Modeling of QoE-Aware Multipath Video Transmission in the Future Internet}, doi = {10.25972/OPUS-6106}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-72324}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2012}, abstract = {Internet applications are becoming more and more flexible to support diverse user demands and network conditions. This is reflected by technical concepts, which provide new adaptation mechanisms to allow fine-grained adjustment of the application quality and the corresponding bandwidth requirements. For the case of video streaming, the scalable video codec H.264/SVC allows the flexible adaptation of frame rate, video resolution and image quality with respect to the available network resources. In order to guarantee a good user-perceived quality (Quality of Experience, QoE), it is necessary to adjust and optimize the video quality accurately. But not only have the applications of the current Internet changed. Within the network and transport layers, new technologies have evolved during the last years, providing a more flexible and efficient usage of data transport and network resources. One of the most promising technologies is Network Virtualization (NV) which is seen as an enabler to overcome the ossification of the Internet stack. It provides means to simultaneously operate multiple logical networks which allow, for example, application-specific addressing, naming and routing, or their individual resource management. 
New transport mechanisms like multipath transmission on the network and transport layer aim at an efficient usage of available transport resources. However, the simultaneous transmission of data via heterogeneous transport paths and communication technologies inevitably introduces packet reordering. Additional mechanisms and buffers are required to restore the correct packet order and thus to prevent a disturbance of the data transport. A proper buffer dimensioning as well as the classification of the impact of varying path characteristics like bandwidth and delay require appropriate evaluation methods. Additionally, real-time evaluation mechanisms are needed for a path selection mechanism. A better application-network interaction and the corresponding exchange of information enable an efficient adaptation of the application to the network conditions and vice versa. This PhD thesis analyzes a video streaming architecture utilizing multipath transmission and scalable video coding and develops the following optimization possibilities and results: Analysis and dimensioning methods for multipath transmission, quantification of the adaptation possibilities to the current network conditions with respect to the QoE for H.264/SVC, and evaluation and optimization of a future video streaming architecture, which allows a better interaction of application and network.}, subject = {Video{\"u}bertragung}, language = {en} } @phdthesis{Zink2024, author = {Zink, Johannes}, title = {Algorithms for Drawing Graphs and Polylines with Straight-Line Segments}, doi = {10.25972/OPUS-35475}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-354756}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2024}, abstract = {Graphs provide a key means to model relationships between entities. They consist of vertices representing the entities, and edges representing relationships between pairs of entities. To help people grasp the structure of a graph, it is almost inevitable to visualize it. We call such a visualization a graph drawing. Moreover, we have a straight-line graph drawing if each vertex is represented as a point (or a small geometric object, e.g., a rectangle) and each edge is represented as a line segment between its two vertices. A polyline is a very simple straight-line graph drawing, where the vertices form a sequence according to which the vertices are connected by edges. An example of a polyline in practice is a GPS trajectory. The underlying road network, in turn, can be modeled as a graph. This book addresses problems that arise when working with straight-line graph drawings and polylines. In particular, we study algorithms for recognizing certain graphs representable with line segments, for generating straight-line graph drawings, and for abstracting polylines. In the first part, we first examine how, and in what time, we can decide whether a given graph is a stick graph, that is, whether its vertices can be represented as vertical and horizontal line segments on a diagonal line such that two segments intersect if and only if there is an edge between them. We then consider the visual complexity of graphs. Specifically, we investigate, for certain classes of graphs, how many line segments are necessary for any straight-line graph drawing, and whether three (or more) different slopes of the line segments are sufficient to draw all edges. 
Last, we study the question of how to assign (ordered) colors to the vertices of a graph with both directed and undirected edges such that no neighboring vertices get the same color and colors ascend along directed edges. Here, the special property of the considered graph is that the vertices can be represented as intervals that overlap if and only if there is an edge between them. The latter problem is motivated by an application in automated drawing of cable plans with vertical and horizontal line segments, which we cover in the second part. We describe an algorithm that takes the abstract description of a cable plan as input and generates a drawing that takes into account the special properties of these cable plans, like plugs and groups of wires. We then experimentally evaluate the quality of the resulting drawings. In the third part, we study the problem of abstracting (or simplifying) a single polyline and a bundle of polylines. In this problem, the objective is to remove as many vertices as possible from the given polyline(s) while keeping each resulting polyline sufficiently similar to its original course (according to a given similarity measure).}, subject = {Graphenzeichnen}, language = {en} } @article{ZimmererFischbachLatoschik2018, author = {Zimmerer, Chris and Fischbach, Martin and Latoschik, Marc Erich}, title = {Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks}, series = {Multimodal Technologies and Interaction}, volume = {2}, journal = {Multimodal Technologies and Interaction}, number = {4}, issn = {2414-4088}, doi = {10.3390/mti2040081}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-197573}, year = {2018}, abstract = {Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial for implementing semantic fusion. They are compliant with rapid development cycles that are common for the development of user interfaces, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: Action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as the support of chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution for fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate for these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept's feasibility in a series of proof-of-concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills the gap left by previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: Its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation has been and continues to be used in various student projects, theses, as well as master-level courses. 
It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.}, language = {en} } @phdthesis{Zhai2010, author = {Zhai, Xiaomin}, title = {Design, Development and Evaluation of a Virtual Classroom and Teaching Contents for Bernoulli Stochastics}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-56106}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2010}, abstract = {This thesis is devoted to Bernoulli Stochastics, which was initiated by Jakob Bernoulli more than 300 years ago with his masterpiece 'Ars conjectandi', which can be translated as 'Science of Prediction'. Thus, Jakob Bernoulli's Stochastics focuses on prediction, in contrast to the later emerging disciplines of probability theory, statistics and mathematical statistics. Only recently was Jakob Bernoulli's focus taken up by von Collani, who developed a unified theory of uncertainty aiming at making reliable and accurate predictions. In this thesis, teaching material as well as a virtual classroom are developed for fostering ideas and techniques initiated by Jakob Bernoulli and elaborated by Elart von Collani. The thesis is part of an extensive project called 'Stochastikon', aiming at introducing Bernoulli Stochastics as a unified science of prediction and measurement under uncertainty. This ambitious aim shall be reached by the development of a comprehensive internet-based system offering the science of Bernoulli Stochastics on any level of application. So far it is planned that the 'Stochastikon' system (http://www.stochastikon.com/) will consist of five subsystems. Two of them are developed and introduced in this thesis. The first one is the e-learning programme 'Stochastikon Magister', and the second one is 'Stochastikon Graphics', which provides the entire Stochastikon system with graphical illustrations. E-learning is the outcome of merging education and internet techniques. E-learning is characterized by the fact that teaching and learning are independent of place and time, as well as of the availability of specially trained teachers. Knowledge offering as well as knowledge transfer are realized using modern information technologies. Nowadays more and more e-learning environments are based on the internet as the primary tool for communication and presentation. E-learning presentation tools include, for instance, text files, pictures, graphics, audio and videos, which can be linked with each other. Access to the teaching contents is essentially unrestricted. Moreover, the students can adapt the speed of learning to their individual abilities. E-learning is particularly appropriate for newly arising scientific and technical disciplines, which generally cannot be presented sufficiently well by traditional learning methods, because neither trained teachers nor textbooks are available. The first part of this dissertation introduces the state of the art of e-learning in statistics, since statistics and Bernoulli Stochastics are both based on probability theory and exhibit many similar features. Since Stochastikon Magister is the first e-learning programme for Bernoulli Stochastics, educational statistics systems are selected for the purpose of comparison and evaluation. This makes sense as both disciplines are an attempt to handle uncertainty and use methods that often can be directly compared. The second part of this dissertation is devoted to Bernoulli Stochastics. 
This part aims at outlining the content of two courses, which have been developed for the anticipated e-learning programme Stochastikon Magister in order to show the difficulties in teaching, understanding and applying Bernoulli Stochastics. The third part discusses the realization of the e-learning programme Stochastikon Magister, its design and implementation, which aims at offering systematic learning of the principles and techniques developed in Bernoulli Stochastics. The resulting e-learning programme differs from commonly developed e-learning programmes, as it is an attempt to provide a virtual classroom that simulates all the functions of real classroom teaching. This is generally not necessary, since most e-learning programmes aim at supporting existing classroom teaching. The fourth part presents two empirical evaluations of Stochastikon Magister. The evaluations are performed by means of comparisons between traditional classroom learning in statistics and e-learning of Bernoulli Stochastics. The aim is to assess the usability and learnability of Stochastikon Magister. Finally, the fifth part of this dissertation is added as an appendix. It refers to Stochastikon Graphics, the fifth component of the entire Stochastikon system. Stochastikon Graphics provides the other components with graphical representations of concepts, procedures and results obtained or used in the framework of Bernoulli Stochastics. The primary aim of this thesis is the development of appropriate software for the anticipated e-learning environment meant for Bernoulli Stochastics, while the preparation of the necessary teaching material constitutes only a secondary aim used for demonstrating the functionality of the e-learning platform and the scientific novelty of Bernoulli Stochastics. To this end, a first version of two teaching courses is developed, implemented and offered on-line in order to collect practical experience. The two courses, which were developed as part of this project, are submitted as a supplement to this dissertation. By now, first experiences with the e-learning programme Stochastikon Magister have been gathered. Students of different faculties of the University of W{\"u}rzburg, as well as researchers and engineers involved in the Stochastikon project, have obtained access to Stochastikon Magister via the internet. They have registered for Stochastikon Magister and participated in the course programme. This thesis reports on two assessments of these first experiences, and the results will lead to further improvements with respect to the content and organization of Stochastikon Magister.}, subject = {Moment }, language = {en} } @phdthesis{Zeiger2010, author = {Zeiger, Florian}, title = {Internet Protocol based networking of mobile robots}, isbn = {978-3-923959-59-4}, doi = {10.25972/OPUS-4661}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-54776}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2010}, abstract = {This work is composed of three main parts: remote control of mobile systems via the Internet, ad-hoc networks of mobile robots, and remote control of mobile robots via 3G telecommunication technologies. The first part gives a detailed state of the art and a discussion of the problems to be solved in order to teleoperate mobile robots via the Internet. The focus of the application to be realized is a distributed tele-laboratory with remote experiments on mobile robots which can be accessed worldwide via the Internet. 
Therefore, analyses of the communication link are used in order to realize a robust system. The developed and implemented architecture of this distributed tele-laboratory allows for smooth access even with variable or low link quality. The second part covers the application of ad-hoc networks for mobile robots. The networking of mobile robots via mobile ad-hoc networks is a very promising approach to realize integrated telematic systems without relying on preexisting communication infrastructure. Relevant civilian application scenarios are, for example, in the area of search and rescue operations, where first responders are supported by multi-robot systems. Here, mobile robots, humans, and also existing stationary sensors can be connected very quickly and efficiently. Therefore, this work investigates and analyses the performance of different ad-hoc routing protocols for IEEE 802.11-based wireless networks in relevant scenarios. The analysis of the different protocols allows for an optimization of the parameter settings in order to use these ad-hoc routing protocols for mobile robot teleoperation. Guidelines for the realization of such telematic systems are given. In addition, application-layer traffic shaping mechanisms are presented that allow for a more efficient use of the communication link. An additional application scenario, the integration of a small-size helicopter into an IP-based ad-hoc network, is presented. The teleoperation of mobile robots via 3G telecommunication technologies is addressed in the third part of this work. The high availability, high mobility, and high bandwidth provide a very interesting opportunity to realize scenarios for the teleoperation of mobile robots or industrial remote maintenance. This work analyses important parameters of the UMTS communication link and also investigates the characteristics of different data streams. These analyses are used to give guidelines that are necessary for the realization of industrial remote maintenance or mobile robot teleoperation scenarios. All the results and guidelines for the design of telematic systems in this work were derived from analyses and experiments with real hardware.}, subject = {Robotik}, language = {en} } @article{YuanBorrmannHouetal.2021, author = {Yuan, Yijun and Borrmann, Dorit and Hou, Jiawei and Ma, Yuexin and N{\"u}chter, Andreas and Schwertfeger, S{\"o}ren}, title = {Self-Supervised point set local descriptors for Point Cloud Registration}, series = {Sensors}, volume = {21}, journal = {Sensors}, number = {2}, issn = {1424-8220}, doi = {10.3390/s21020486}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-223000}, year = {2021}, abstract = {Descriptors play an important role in point cloud registration. The current state-of-the-art resorts to the high regression capability of deep learning. However, recent deep learning-based descriptors require different levels of annotation and selection of patches, which make the model hard to migrate to new scenarios. In this work, we learn local registration descriptors for point clouds in a self-supervised manner. In each iteration of the training, the input of the network is merely one unlabeled point cloud. Thus, the whole training requires no manual annotation or manual selection of patches. In addition, we propose to incorporate keypoint sampling into the pipeline, which further improves the performance of our model. 
Our experiments demonstrate the capability of our self-supervised local descriptor to achieve even better performance than the supervised model, while being easier to train and requiring no data labeling.}, language = {en} } @phdthesis{Xu2014, author = {Xu, Zhihao}, title = {Cooperative Formation Controller Design for Time-Delay and Optimality Problems}, isbn = {978-3-923959-96-9}, doi = {10.25972/OPUS-10555}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-105555}, school = {Universit{\"a}t W{\"u}rzburg}, year = {2014}, abstract = {This dissertation presents controller design methodologies for a formation of cooperative mobile robots to perform trajectory tracking and convoy protection tasks. Two major problems related to multi-agent formation control are addressed, namely the time-delay and optimality problems. For the task of trajectory tracking, a leader-follower-based system structure is adopted for the controller design, where the selection criteria for controller parameters are derived through analyses of characteristic polynomials. The resulting parameters ensure the stability of the system and overcome the steady-state error as well as the oscillation behavior under the time-delay effect. In the convoy protection scenario, a decentralized coordination strategy for balanced deployment of mobile robots is first proposed. Based on this coordination scheme, optimal controller parameters are generated in both centralized and decentralized fashion to achieve dynamic convoy protection in a unified framework, where a distributed optimization technique is applied in the decentralized strategy. This unified framework takes into account the motion of the target to be protected and the desired system performance, for instance, minimizing the energy spent or keeping equal inter-vehicle distances. Both trajectory tracking and convoy protection tasks are demonstrated through simulations and real-world hardware experiments based on the robotic equipment at the Department of Computer Science VII, University of W{\"u}rzburg.}, subject = {Optimalwertregelung}, language = {en} } @article{WolffRutter2012, author = {Wolff, Alexander and Rutter, Ignaz}, title = {Augmenting the Connectivity of Planar and Geometric Graphs}, series = {Journal of Graph Algorithms and Applications}, journal = {Journal of Graph Algorithms and Applications}, doi = {10.7155/jgaa.00275}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-97587}, year = {2012}, abstract = {In this paper we study connectivity augmentation problems. Given a connected graph G with some desirable property, we want to make G 2-vertex connected (or 2-edge connected) by adding edges such that the resulting graph keeps the property. The aim is to add as few edges as possible. The property that we consider is planarity, both in an abstract graph-theoretic and in a geometric setting, where vertices correspond to points in the plane and edges to straight-line segments. We show that it is NP-hard to find a minimum-cardinality augmentation that makes a planar graph 2-edge connected. For making a planar graph 2-vertex connected, this was known. We further show that both problems are hard in the geometric setting, even when restricted to trees. The problems remain hard for higher degrees of connectivity. On the other hand, we give polynomial-time algorithms for the special case of convex geometric graphs. We also study the following related problem. 
Given a planar (plane geometric) graph G, two vertices s and t of G, and an integer c, how many edges have to be added to G such that G is still planar (plane geometric) and contains c edge- (or vertex-) disjoint s-t paths? For the planar case, we give a linear-time algorithm for c = 2. For the plane geometric case, we give optimal worst-case bounds for c = 2; for c = 3 we characterize the cases that have a solution.}, language = {en} } @article{WolfDoellingerMaletal.2022, author = {Wolf, Erik and D{\"o}llinger, Nina and Mal, David and Wenninger, Stephan and Bartl, Andrea and Botsch, Mario and Latoschik, Marc Erich and Wienrich, Carolin}, title = {Does distance matter? Embodiment and perception of personalized avatars in relation to the self-observation distance in virtual reality}, series = {Frontiers in Virtual Reality}, volume = {3}, journal = {Frontiers in Virtual Reality}, issn = {2673-4192}, doi = {10.3389/frvir.2022.1031093}, url = {http://nbn-resolving.de/urn:nbn:de:bvb:20-opus-299415}, year = {2022}, abstract = {Virtual reality applications employing avatar embodiment typically use virtual mirrors to allow users to perceive their digital selves not only from a first-person but also from a holistic third-person perspective. However, due to distance-related biases such as the distance compression effect or a reduced relative rendering resolution, the self-observation distance (SOD) between the user and the virtual mirror might influence how users perceive their embodied avatar. Our article systematically investigates the effects of a short (1 m), middle (2.5 m), and far (4 m) SOD between users and mirror on the perception of their personalized and self-embodied avatars. The avatars were photorealistically reconstructed using state-of-the-art photogrammetric methods. Thirty participants repeatedly faced their real-time animated self-embodied avatars in each of the three SOD conditions, where the avatars were repeatedly altered in their body weight, and the participants rated 1) the sense of embodiment, 2) body weight perception, and 3) affective appraisal towards their avatar. We found that the different SODs are unlikely to influence any of our measures except for the perceived body weight estimation difficulty. Here, the participants perceived the difficulty as significantly higher for the farthest SOD. We further found that the participants' self-esteem significantly impacted their ability to modify their avatar's body weight to their current body weight and that it positively correlated with the perceived attractiveness of the avatar. Additionally, the participants' concerns about their body shape affected how eerie they perceived their avatars to be. The participants' self-esteem and concerns about their body shape influenced the perceived body weight estimation difficulty. We conclude that the virtual mirror in embodiment scenarios can be freely placed and varied at a distance of one to four meters from the user without expecting major effects on the perception of the avatar.}, language = {en} }