The Wuertual Reality XR Meeting 2023 was initiated to bring together researchers from many fields who use VR/AR/XR, with a focus on applied XR and social VR.
These conference proceedings contain the abstracts of the two keynotes, the 34 posters and poster pitches, the 29 talks, and the four workshops.
Climate models are the tool of choice for scientists researching climate change. Like all models, they suffer from errors, particularly systematic and location-specific representation errors. One way to reduce these errors is model output statistics (MOS), where the model output is fitted to observational data with machine learning. In this work, we assess the use of convolutional Deep Learning climate MOS approaches and present the ConvMOS architecture, which is specifically designed based on the observation that there are systematic and location-specific errors in the precipitation estimates of climate models. We apply ConvMOS models to the simulated precipitation of the regional climate model REMO, showing that a combination of per-location model parameters for reducing location-specific errors and global model parameters for reducing systematic errors is indeed beneficial for MOS performance. We find that ConvMOS models can reduce errors considerably and perform significantly better than three commonly used MOS approaches and plain ResNet and U-Net models in most cases. Our results show that non-linear MOS models underestimate the number of extreme precipitation events, which we alleviate by training models specialized towards extreme precipitation events with the imbalanced regression method DenseLoss. While we consider climate MOS, we argue that aspects of ConvMOS may also be beneficial in other domains with geospatial data, such as air pollution modeling or weather forecasts.
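The core idea of combining per-location and global parameters can be sketched with a deliberately simplified, non-convolutional toy model. This is a hypothetical illustration only (the actual ConvMOS architecture uses convolutional networks): a single global scale factor corrects a systematic error shared by all grid cells, and a per-cell bias corrects location-specific errors.

```python
# Toy MOS sketch: correct simulated precipitation on a small grid by
# combining one global scale (systematic error) with a per-location
# bias (location-specific error). Hypothetical illustration only;
# ConvMOS itself learns these corrections with convolutional networks.

def fit_mos(simulated, observed):
    """simulated/observed: lists of time steps, each a dict cell -> value."""
    cells = simulated[0].keys()
    # Global scale: ratio of total observed to total simulated
    # precipitation, reducing an everywhere-the-same systematic error.
    sim_total = sum(v for step in simulated for v in step.values())
    obs_total = sum(v for step in observed for v in step.values())
    global_scale = obs_total / sim_total
    # Per-location bias: mean residual at each cell after global
    # scaling, reducing location-specific errors.
    n = len(simulated)
    local_bias = {
        c: sum(observed[t][c] - global_scale * simulated[t][c]
               for t in range(n)) / n
        for c in cells
    }
    return global_scale, local_bias

def apply_mos(step, global_scale, local_bias):
    # Corrected value = global scale * simulated + per-cell bias,
    # floored at zero since precipitation cannot be negative.
    return {c: max(0.0, global_scale * v + local_bias[c])
            for c, v in step.items()}
```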
Development, Simulation and Evaluation of Mobile Wireless Networks in Industrial Applications
(2023)
Manyindustrialautomationsolutionsusewirelesscommunicationandrelyontheavail-
ability and quality of the wireless channel. At the same time the wireless medium is
highly congested and guaranteeing the availability of wireless channels is becoming
increasingly difficult. In this work we show, that ad-hoc networking solutions can be
used to provide new communication channels and improve the performance of mobile
automation systems. These ad-hoc networking solutions describe different communi-
cation strategies, but avoid relying on network infrastructure by utilizing the Peer-to-
Peer (P2P) channel between communicating entities.
This work is a step towards the effective implementation of low-range communication
technologies(e.g. VisibleLightCommunication(VLC), radarcommunication, mmWave
communication) to the industrial application. Implementing infrastructure networks
with these technologies is unrealistic, since the low communication range would neces-
sitate a high number of Access Points (APs) to yield full coverage. However, ad-hoc
networks do not require any network infrastructure. In this work different ad-hoc net-
working solutions for the industrial use case are presented and tools and models for
their examination are proposed.
The main use case investigated in this work are Automated Guided Vehicles (AGVs)
for industrial applications. These mobile devices drive throughout the factory trans-
porting crates, goods or tools or assisting workers. In most implementations they must
exchange data with a Central Control Unit (CCU) and between one another. Predicting
if a certain communication technology is suitable for an application is very challenging
since the applications and the resulting requirements are very heterogeneous.
The proposed models and simulation tools enable the simulation of the complex inter-
action of mobile robotic clients and a wireless communication network. The goal is to
predict the characteristics of a networked AGV fleet.
Theproposedtoolswereusedtoimplement, testandexaminedifferentad-hocnetwork-
ing solutions for industrial applications using AGVs. These communication solutions
handle time-critical and delay-tolerant communication. Additionally a control method
for the AGVs is proposed, which optimizes the communication and in turn increases the
transport performance of the AGV fleet. Therefore, this work provides not only tools
for the further research of industrial ad-hoc system, but also first implementations of
ad-hoc systems which address many of the most pressing issues in industrial applica-
tions.
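The key property of such an ad-hoc network can be illustrated with a minimal multi-hop connectivity sketch. This is a hypothetical illustration, not one of the thesis's actual protocols: each node relays for its neighbours, so no Access Points are needed, and a message reaches the CCU whenever a chain of nodes within radio range connects sender and receiver.

```python
# Minimal sketch of multi-hop ad-hoc reachability between AGVs
# (hypothetical illustration of the P2P idea; node names and the
# radio-range model are assumptions, not the thesis's protocols).
from collections import deque
import math

def reachable(positions, src, dst, radio_range):
    """positions: dict node -> (x, y).

    BFS over the ad-hoc connectivity graph: two nodes are linked if
    their distance is within radio range, and every node relays.
    """
    def linked(a, b):
        (ax, ay), (bx, by) = positions[a], positions[b]
        return math.hypot(ax - bx, ay - by) <= radio_range
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for other in positions:
            if other not in seen and linked(node, other):
                seen.add(other)
                queue.append(other)
    return False
```

With a short radio range, as with VLC or mmWave links, delivery depends entirely on whether the current AGV positions form a connected relay chain, which is exactly why simulating the interaction of fleet movement and network topology matters.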
Slot machines are one of the most played games by players suffering from gambling disorder. New technologies like immersive Virtual Reality (VR) offer more possibilities to exploit erroneous beliefs in the context of gambling. Recent research indicates a higher risk potential when playing a slot machine in VR than on desktop. To continue this investigation, we evaluate the effects of providing different degrees of embodiment, i.e., minimal and full embodiment. The avatars used for the full embodiment further differ in their appearance, i.e., they elicit a high or a low socio-economic status. The design of the virtual environment (VE) can also influence overall gambling behavior. Thus, we embed the slot machine in two different VEs that differ in their emotional design: a colorful underwater playground environment and a virtual counterpart of our lab. These design considerations resulted in four different versions of the same VR slot machine: 1) full embodiment with high socio-economic status, 2) full embodiment with low socio-economic status, 3) minimal embodiment playground VE, and 4) minimal embodiment laboratory VE. Both full embodiment versions also used the playground VE. We determine the risk potential by logging gambling frequency as well as stake size, and measuring harm-inducing factors, i.e., dissociation, urge to gamble, dark flow, and illusion of control, using questionnaires. Following a between-groups experimental design, 82 participants played one of the four versions for 20 game rounds. We recruited our sample from the students enrolled at the University of Würzburg. Our safety protocol ensured that only participants without any recent gambling activity took part in the experiment. In this comparative user study, we found no effect of embodiment or VE design on gambling frequency, stake sizes, or risk potential.
However, our results provide further support for the hypothesis that the larger visual angle on gambling stimuli, and hence the increased emotional response, is the true cause of the higher risk potential.
Immersive, sensor-enabled technologies such as augmented and virtual reality significantly expand the way human beings interact with computers. While these technologies are widely explored in entertainment games, they also offer possibilities for educational use. However, their uptake in education is so far very limited. Within the ImTech4Ed project, we aim to systematically explore the power of interdisciplinary, international hackathons as a novel method to create immersive educational game prototypes and as a means to transfer these innovative technical prototypes into educational use. To achieve this, we bring together game design and development, where immersive and interactive solutions are designed and developed; computer science, where the technological foundations for immersive technologies and for scalable architectures for these are created; and teacher education, where future teachers are educated. This article reports on the concept and design of these hackathons.
With the increasing adaptability and complexity of advisory artificial intelligence (AI)-based agents, the topics of explainable AI and human-centered AI are moving close together. Variations in the explanation itself have been widely studied, with some contradictory results. These could be due to users’ individual differences, which have rarely been systematically studied regarding their inhibiting or enabling effect on the fulfillment of explanation objectives (such as trust, understanding, or workload). This paper aims to shed light on the significance of human dimensions (gender, age, trust disposition, need for cognition, affinity for technology, self-efficacy, attitudes, and mind attribution) as well as their interplay with different explanation modes (no, simple, or complex explanation). Participants played the game Deal or No Deal while interacting with an AI-based agent. The agent gave advice to the participants on whether they should accept or reject the deals offered to them. As expected, giving an explanation had a positive influence on the explanation objectives. However, the users’ individual characteristics particularly reinforced the fulfillment of the objectives. The strongest predictor of objective fulfillment was the degree of attribution of human characteristics. The more human characteristics were attributed, the more trust was placed in the agent, advice was more likely to be accepted and understood, and important needs were satisfied during the interaction. Thus, the current work contributes to a better understanding of the design of explanations of an AI-based agent system that takes into account individual characteristics and meets the demand for both explainable and human-centered agent systems.
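The paper does not disclose the agent's actual decision rule, but a common baseline for advisory agents in Deal or No Deal compares the banker's offer with the expected value of the remaining amounts. The following sketch is purely hypothetical and only illustrates the kind of accept/reject advice participants received.

```python
# Hypothetical baseline for an advisory agent in Deal or No Deal:
# advise accepting when the banker's offer is at least the expected
# value of the remaining amounts. Not the study's actual agent logic.

def advise(remaining_amounts, offer):
    """remaining_amounts: monetary values still in play; offer: banker's offer."""
    expected_value = sum(remaining_amounts) / len(remaining_amounts)
    return "accept" if offer >= expected_value else "reject"
```

An explanation mode as studied above could then range from no justification at all, over a simple statement of the advice, to a complex one exposing the expected-value comparison behind it.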
Machine-Learning-Based Identification of Tumor Entities, Tumor Subgroups, and Therapy Options
(2023)
Molecular genetic analyses, such as mutation analyses, are becoming increasingly important in the tumor field, especially in the context of therapy stratification. The identification of the underlying tumor entity is crucial, but can sometimes be difficult, for example in the case of metastases or the so-called Cancer of Unknown Primary (CUP) syndrome. In recent years, machine learning (ML) approaches utilizing methylome and transcriptome data have been developed to enable fast and reliable identification of tumors and tumor subtypes. However, so far only methylome analyses have become widely used in routine diagnostics.
The present work addresses the utility of publicly available RNA-sequencing data to determine the underlying tumor entity, possible subgroups, and potential therapy options. Identification of these by ML - in particular random forest (RF) models - was the first task. The results with test accuracies of up to 99% provided new, previously unknown insights into the trained models and the corresponding entity prediction. Reducing the input data to the top 100 mRNA transcripts resulted in a minimal loss of prediction quality and could potentially enable application in clinical or real-world settings.
By introducing the ratios of these top 100 genes to each other as a new database for RF models, a novel method was developed enabling the use of trained RF models on data from other sources.
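The ratio idea can be sketched as follows (gene names hypothetical; a simplified version of the described feature construction, without the RF model itself):

```python
# Sketch of ratio-based features: for a fixed list of top genes, build
# all pairwise expression ratios for one sample. Ratios are invariant
# to a global scaling of the sample, which is why models trained on
# them can transfer to expression data normalised differently at
# another site. Gene names are hypothetical placeholders.
from itertools import combinations

def ratio_features(expression, genes, eps=1e-9):
    """expression: dict gene -> expression value.

    Returns a dict 'g1/g2' -> ratio for every gene pair; eps guards
    against division by zero for unexpressed genes.
    """
    return {
        f"{g1}/{g2}": expression[g1] / (expression[g2] + eps)
        for g1, g2 in combinations(genes, 2)
    }
```

Multiplying all of a sample's values by a library-size factor leaves every ratio unchanged, so a random forest trained on such features does not depend on the absolute scale of the source dataset.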
Further analysis of the transcriptomic differences of metastatic samples by visual clustering showed no differences specific to the site of metastasis. Similarly, no distinct clusters were detectable when investigating primary tumors and metastases of skin cutaneous melanoma (SKCM).
Subsequently, more than half of the validation datasets had a prediction accuracy of at least 80%, with many datasets even achieving a prediction accuracy of – or close to – 100%.
To investigate the applicability of the used methods for subgroup identification, the TCGA-KIPAN dataset, consisting of the three major kidney cancer subgroups, was used. The results revealed a new, previously unknown subgroup consisting of all histopathological groups with clinically relevant characteristics, such as significantly different survival. Based on significant differences in gene expression, potential therapeutic options of the identified subgroup could be proposed.
In conclusion, exploring the potential applicability of RNA-sequencing data as a basis for therapy prediction showed that this type of data is suitable for predicting entities as well as subgroups with high accuracy. Clinical relevance was also demonstrated for a novel subgroup in renal cell carcinoma. Reducing the number of genes required for entity prediction to 100 enables panel sequencing and thus demonstrates potential applicability in a real-life setting.
OCR4all—An open-source tool providing a (semi-)automatic OCR workflow for historical printings
(2019)
Optical Character Recognition (OCR) on historical printings is a challenging task mainly due to the complexity of the layout and the highly variant typography. Nevertheless, in the last few years, great progress has been made in the area of historical OCR, resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, character recognition, and post-processing. The drawback of these tools is often their limited usability for non-technical users such as humanities scholars, in particular when several tools must be combined in a workflow. In this paper, we present an open-source OCR software called OCR4all, which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users. This is mostly due to the fact that the ground truth required for training stronger mixed models (for segmentation as well as text recognition) is not yet available in the desired quantity or quality. To deal with this issue in the short run, OCR4all offers a comfortable GUI that allows error corrections not only in the final output, but already in early stages to minimize error propagation. In the long run, this constant manual correction produces large quantities of valuable, high-quality training material, which can be used to improve fully automatic approaches. Furthermore, extensive configuration capabilities are provided to set the degree of automation of the workflow and to make adaptations to the carefully selected default parameters for specific printings, if necessary. During experiments, the fully automated application on 19th century novels showed that OCR4all can considerably outperform the commercial state-of-the-art tool ABBYY Finereader on moderate layouts if suitably pretrained mixed OCR models are available.
Furthermore, on very complex early printed books, even users with minimal or no experience were able to capture the text with manageable effort and great quality, achieving excellent Character Error Rates (CERs) below 0.5%. The architecture of OCR4all allows the easy integration (or substitution) of newly developed tools for its main components by standardized interfaces like PageXML, thus aiming at continual higher automation for historical printings.
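The Character Error Rate (CER) reported above is conventionally defined as the Levenshtein edit distance between the OCR output and the ground truth, divided by the length of the ground truth. A minimal sketch of that metric:

```python
# Character Error Rate (CER): Levenshtein edit distance between the
# OCR output and the ground-truth text, divided by the ground-truth
# length. A CER below 0.5% means fewer than 5 character edits per
# 1000 ground-truth characters.

def levenshtein(a, b):
    """Minimum number of character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(ocr_text, ground_truth):
    return levenshtein(ocr_text, ground_truth) / len(ground_truth)
```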
In recent years, the applications and accessibility of Virtual Reality (VR) for the healthcare sector have continued to grow. However, so far, most VR applications are only relevant in research settings. Information about what healthcare professionals would need to independently integrate VR applications into their daily working routines is missing. The actual needs and concerns of the people who work in the healthcare sector are often disregarded in the development of VR applications, even though they are the ones who are supposed to use them in practice. By means of this study, we systematically involve health professionals in the development process of VR applications. In particular, we conducted an online survey with 102 healthcare professionals based on a video prototype which demonstrates a software platform that allows them to create and utilise VR experiences on their own. For this study, we adapted and extended the Technology Acceptance Model (TAM). The survey focused on the perceived usefulness and the ease of use of such a platform, as well as the attitude and ethical concerns the users might have. The results show a generally positive attitude toward such a software platform. The users can imagine various use cases in different health domains. However, the perceived usefulness is tied to the actual ease of use of the platform and sufficient support for learning and working with the platform. In the discussion, we explain how these results can be generalized to facilitate the integration of VR in healthcare practice.