006 Spezielle Computerverfahren
Document Type
- Journal article (11)
- Conference Proceeding (4)
- Doctoral Thesis (4)
- Master Thesis (1)
Keywords
- Virtual Reality (3)
- Virtuelle Realität (3)
- virtual reality (3)
- Augmented Reality (2)
- Digital Humanities (2)
- Extended Reality (2)
- Social VR (2)
- 3D model generation (1)
- Autonomer Roboter (1)
- Bildverarbeitung (1)
- Deep learning (1)
- Digitale Textanalyse (1)
- Domänenadaption (1)
- Drahtloses vermaschtes Netzwerk (1)
- Echtzeitsystem (1)
- Entscheidungsfindung (1)
- Erweiterte Realität <Informatik> (1)
- Ethik (1)
- Figurenerkennung (1)
- Figurenkonstellation (1)
- Figurennetzwerke (1)
- Framework <Informatik> (1)
- Funknetz (1)
- Image Processing (1)
- Industrie (1)
- Industrie-Roboter (1)
- Intelligent Realtime Interactive System (1)
- Intelligent Virtual Environment (1)
- Kabellose Netzwerke (1)
- Klima (1)
- Knowledge Representation Layer (1)
- Krebs <Medizin> (1)
- Künstliche Intelligenz (1)
- Literatur (1)
- Literaturwissenschaft (1)
- Maschinelles Lernen (1)
- Modell (1)
- Modul <Software> (1)
- Named-Entity-Recognition (1)
- Netzwerkanalyse <Soziologie> (1)
- Neuronales Netz (1)
- Ontologie <Wissensverarbeitung> (1)
- Optical Music Recognition (1)
- PMD (1)
- Prognose (1)
- RNA-Sequenzierung (1)
- Random Forest (1)
- Raumfahrt (1)
- Routing (1)
- Semantic Entity Model (1)
- Sensor (1)
- Sequenzdaten (1)
- Simulation (1)
- Software Engineering (1)
- Softwarewiederverwendung (1)
- Topic Modeling (1)
- Tumor (1)
- Vorhersage (1)
- Wissensrepräsentation (1)
- adaptive tutoring (1)
- artificial intelligence (1)
- authoring platform (1)
- avatar embodiment (1)
- avatars (1)
- background knowledge (1)
- body awareness (1)
- body image disturbance (1)
- body weight modification (1)
- body weight perception (1)
- classification (1)
- climate (1)
- complications (1)
- data fusion (1)
- document analysis (1)
- eating and body weight disorders (1)
- educational games (1)
- embodiment (1)
- ethics (1)
- explainable AI (1)
- explanation complexity (1)
- fully convolutional neural networks (1)
- gambling (1)
- hackathons (1)
- healthcare (1)
- healthcare professionals (1)
- higher education (1)
- historical document analysis (1)
- historical printings (1)
- human-centered AI (1)
- human-robot interaction (1)
- immersion (1)
- immersive technologies (1)
- individual differences (1)
- machine learning (1)
- medieval manuscripts (1)
- model output statistics (1)
- neume notation (1)
- neural networks (1)
- object reconstruction (1)
- optical character recognition (1)
- oro-antral communication (1)
- oro-antral fistula (1)
- phase unwrapping (1)
- pose estimation (1)
- prediction (1)
- recommender agent (1)
- rehabilitation (1)
- rendezvous and docking (1)
- risks (1)
- robot-supported training (1)
- robotic tutor (1)
- site mapping (1)
- structured light illumination (1)
- technology acceptance (1)
- technology-supported education (1)
- teeth extraction (1)
- therapy (1)
- underwater 3D scanning (1)
- user experience (1)
- virtual environments (1)
Institute
- Institut für Informatik (14)
- Institut Mensch - Computer - Medien (2)
- Institut für Psychologie (2)
- Institut für deutsche Philologie (2)
- Graduate School of Life Sciences (1)
- Graduate School of Science and Technology (1)
- Institut für Geographie und Geologie (1)
- Institut für Philosophie (1)
- Klinik und Poliklinik für Mund-, Kiefer- und Plastische Gesichtschirurgie (1)
- Pathologisches Institut (1)
In this work, a novel method for estimating the relative pose of a known object is presented, which relies on an application-specific data fusion process. A PMD sensor is used in conjunction with a CCD sensor to perform the pose estimation. Furthermore, the work provides a method for extending the measurement range of the PMD sensor, along with the necessary calibration methodology. Finally, extensive measurements on a highly accurate rendezvous-and-docking testbed are performed to evaluate the performance, which includes a detailed discussion of lighting conditions.
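The general idea of fusing range estimates from two heterogeneous sensors can be illustrated with a simple inverse-variance weighting scheme. This is a generic sketch with assumed noise variances, not the application-specific fusion process developed in the thesis:

```python
# Illustrative only: inverse-variance weighted fusion of two noisy
# range estimates, one from a depth (PMD-style) sensor and one
# derived from a 2D (CCD-style) image. The variances are assumed.

def fuse_measurements(z_pmd, var_pmd, z_ccd, var_ccd):
    """Fuse two scalar measurements by inverse-variance weighting.

    Returns the fused estimate and its (smaller) variance.
    """
    w_pmd = 1.0 / var_pmd
    w_ccd = 1.0 / var_ccd
    z_fused = (w_pmd * z_pmd + w_ccd * z_ccd) / (w_pmd + w_ccd)
    var_fused = 1.0 / (w_pmd + w_ccd)
    return z_fused, var_fused

# Example: the depth sensor reads 5.2 m (variance 0.04), the
# image-based estimate is 5.0 m (variance 0.01); the fused value
# leans toward the more certain image-based estimate.
z, v = fuse_measurements(5.2, 0.04, 5.0, 0.01)
```

The fused variance is always smaller than either input variance, which is the basic motivation for combining complementary sensors in the first place.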
An important prerequisite for the quantitative analysis of narrative texts, such as a network analysis of the character constellation, is the automatic detection of references to characters in narrative texts, a special case of the generic NLP problem of Named Entity Recognition. Existing models trained on newspaper texts are of limited use for literary texts, since the inclusion of appellatives in the named-entity definition and their frequent use in novels lead to poor results. This paper presents an NER component adapted to 19th-century German novels on the basis of a manually annotated corpus.
This thesis belongs to the field of quantitative literary analysis and pursues the goal of investigating, by computational means, to what extent novels resemble each other in their character constellations. To this end, the character constellation, an important structuring principle of a novel, is operationalized as a social network of the characters. Such networks can be generated automatically from the text using Natural Language Processing techniques.
The data basis is a corpus of 19th-century German novels that was processed with automatic methods for character recognition and coreference resolution and then manually corrected in order to obtain as clean a data basis as possible.
Starting from an in-depth comparative study of the character constellations of Fontane's "Effi Briest" and Flaubert's "Madame Bovary", the human intuition of such similarity between all novels in the corpus was recorded in a manually created distance matrix, based on the reading of summaries of the novels. These data are used as the evaluation baseline.
Using methods of social network analysis, structural properties of these networks can be extracted as features. These features were then used to compute the cosine distance between the novels.
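The step from character networks to pairwise distances can be sketched in plain Python. The two structural features (density, mean degree) and the example edge lists are illustrative stand-ins for the richer feature set used in the thesis:

```python
import math

# Hypothetical sketch: derive simple structural features from two
# character networks given as node/edge lists, then compare the
# feature vectors via cosine distance.

def network_features(nodes, edges):
    """Return [density, mean degree] for an undirected simple graph."""
    n = len(nodes)
    m = len(edges)
    density = 2 * m / (n * (n - 1)) if n > 1 else 0.0
    mean_degree = 2 * m / n if n else 0.0
    return [density, mean_degree]

def cosine_distance(a, b):
    """1 - cosine similarity of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm if norm else 1.0

# Invented miniature character networks for the two novels.
effi = network_features(
    ["Effi", "Innstetten", "Crampas", "Roswitha"],
    [("Effi", "Innstetten"), ("Effi", "Crampas"), ("Effi", "Roswitha")])
bovary = network_features(
    ["Emma", "Charles", "Rodolphe", "Léon", "Homais"],
    [("Emma", "Charles"), ("Emma", "Rodolphe"),
     ("Emma", "Léon"), ("Charles", "Homais")])
d = cosine_distance(effi, bovary)  # small: both are star-like networks
```

With only two features the comparison is crude; the point is merely that structurally similar constellations (here, two star-shaped networks around a protagonist) end up close in feature space.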
Although the automatically generated networks generally reflect the character constellations of the novels well and the network features can be interpreted meaningfully, the correlation with the evaluation baseline was low. This suggests that, in addition to the structure of the character constellation, recurring themes and motifs subconsciously influenced the creation of the evaluation baseline.
Topic modeling was therefore applied to model important interpersonal motifs that may be relevant to the character constellation. The network features and the topic distribution were used in combination to compute the distances. In addition, an attempt was made to assign to each edge of the character network a topic that describes the content of that relationship. Here it became apparent that the results are dominated, on the one hand, by topics that are highly specific to certain texts and, on the other, by topics that are strongly represented across all texts, so that once again no, or only a very weak, correlation with the evaluation baseline could be found.
The fact that no connection between the computed distances and the evaluation baseline could be found, even though the individual features can be interpreted meaningfully, casts doubt on the evaluation matrix. It appears to have been subconsciously influenced by thematic and motivic similarities between the novels more strongly than initially assumed. The quality of the respective summaries also has a considerable influence here. A less subjective form of evaluation would therefore be needed, for example through parallel assessments by several annotators. The further improvement of NLP methods for literary texts in German is another desideratum for follow-up research.
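The parallel assessment by several annotators suggested above is usually accompanied by an agreement measure; Cohen's kappa is a standard choice for two annotators. A minimal sketch with invented similarity labels:

```python
from collections import Counter

# Hedged sketch: Cohen's kappa quantifies agreement between two
# annotators beyond chance. The labels below (ratings of novel-pair
# similarity) are invented for illustration.

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two equal-length label lists."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    # Expected agreement if both annotators labeled at random with
    # their observed label frequencies.
    expected = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["similar", "similar", "different", "different", "similar", "different"]
b = ["similar", "different", "different", "different", "similar", "different"]
kappa = cohens_kappa(a, b)
```

A kappa near 1 would indicate that the similarity judgments are reproducible; a value near 0 would confirm the suspicion that the matrix reflects individual, subconscious criteria.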
Software frameworks for Realtime Interactive Systems (RIS), e.g., in the areas of Virtual, Augmented, and Mixed Reality (VR, AR, and MR) or computer games, provide a multitude of functionalities by coupling diverse software modules. No uniform methodology for coupling these modules exists; instead, various purpose-built solutions have been proposed. As a consequence, important software qualities, such as maintainability, reusability, and adaptability, are impeded.
Many modern systems additionally support the integration of Artificial Intelligence (AI) methods to create so-called intelligent virtual environments. These methods further exacerbate the problem of coupling software modules in the resulting Intelligent Realtime Interactive Systems (IRIS). This is due, on the one hand, to the commonly used specialized data structures and asynchronous execution schemes and, on the other, to the requirement for high consistency between forms of data representation that are coupled in content but functionally decoupled.
This work proposes an approach to decoupling software modules in IRIS, which is based on the abstraction of architecture elements using a semantic Knowledge Representation Layer (KRL). The layer facilitates decoupling the required modules, provides a means for ensuring interface compatibility and consistency, and in the end constitutes an interface for symbolic AI methods.
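The decoupling idea behind a shared semantic layer can be illustrated with a toy publish/subscribe blackboard. The class and keys below are invented for illustration and are far simpler than the thesis's actual KRL:

```python
# Toy sketch of the decoupling idea: modules never call each other
# directly; they read and write semantically keyed facts through a
# shared knowledge layer, which notifies subscribers on change.

class KnowledgeLayer:
    def __init__(self):
        self._facts = {}
        self._subscribers = {}

    def subscribe(self, key, callback):
        """Register interest in a semantic key."""
        self._subscribers.setdefault(key, []).append(callback)

    def assert_fact(self, key, value):
        """Store a fact and push it to all subscribers of its key."""
        self._facts[key] = value
        for cb in self._subscribers.get(key, []):
            cb(value)

    def query(self, key):
        """Symbolic lookup; this is where AI methods could hook in."""
        return self._facts.get(key)

# A rendering module and an AI planner stay mutually unaware; both
# only share the semantic key "avatar.position".
krl = KnowledgeLayer()
log = []
krl.subscribe("avatar.position", lambda pos: log.append(pos))
krl.assert_fact("avatar.position", (1.0, 0.0, 2.0))
```

Because producers and consumers agree only on keys and value shapes, a module can be replaced without touching the others, which is the software quality the abstract is after.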
OCR4all—An open-source tool providing a (semi-)automatic OCR workflow for historical printings
(2019)
Optical Character Recognition (OCR) on historical printings is a challenging task, mainly due to the complexity of the layout and the highly variant typography. Nevertheless, in the last few years great progress has been made in the area of historical OCR, resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, character recognition, and post-processing. The drawback of these tools is often their limited usability for non-technical users such as humanist scholars, in particular when several tools have to be combined in a workflow. In this paper, we present an open-source OCR software called OCR4all, which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users. This is mostly because the ground truth required for training stronger mixed models (for segmentation as well as text recognition) is not yet available in the desired quantity or quality. To deal with this issue in the short run, OCR4all offers a comfortable GUI that allows error corrections not only in the final output but already in early stages, to minimize error propagation. In the long run, this constant manual correction produces large quantities of valuable, high-quality training material, which can be used to improve fully automatic approaches. Furthermore, extensive configuration capabilities are provided to set the degree of automation of the workflow and, if necessary, to adapt the carefully selected default parameters to specific printings. In our experiments, the fully automated application to 19th-century novels showed that OCR4all can considerably outperform the commercial state-of-the-art tool ABBYY Finereader on moderate layouts if suitably pretrained mixed OCR models are available. Furthermore, on very complex early printed books, even users with minimal or no experience were able to capture the text with manageable effort and great quality, achieving excellent Character Error Rates (CERs) below 0.5%. The architecture of OCR4all allows the easy integration (or substitution) of newly developed tools for its main components via standardized interfaces such as PageXML, thus aiming at a continually higher degree of automation for historical printings.
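The Character Error Rate cited above is conventionally computed as the edit (Levenshtein) distance between recognized text and ground truth, normalized by the ground-truth length. A minimal sketch with an invented example line:

```python
# Character Error Rate (CER) as commonly used to evaluate OCR output.

def levenshtein(a, b):
    """Edit distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(recognized, ground_truth):
    """Edit distance normalized by ground-truth length."""
    return levenshtein(recognized, ground_truth) / len(ground_truth)

# A single substitution ('J' for 'I') against a 16-character ground
# truth; a CER below 0.5% means fewer than one error per 200 characters.
rate = cer("Jn the beginning", "In the beginning")
```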
Immersive, sensor-enabled technologies such as augmented and virtual reality significantly expand the way human beings interact with computers. While these technologies are widely explored in entertainment games, they also offer possibilities for educational use. However, their uptake in education has so far been very limited. Within the ImTech4Ed project, we aim to systematically explore the power of interdisciplinary, international hackathons as a novel method to create immersive educational game prototypes and as a means to transfer these innovative technical prototypes into educational use. To achieve this, we bring together game design and development, where immersive and interactive solutions are designed and developed; computer science, where the technological foundations for immersive technologies and scalable architectures are created; and teacher education, where future teachers are educated. This article reports on the concept and design of these hackathons.
Artificial intelligence (AI) is developing rapidly and has already achieved impressive successes, including superhuman competence in most games and many quiz shows, intelligent search engines, individualized advertising, speech recognition, synthesis, and translation at a very high level, and excellent performance in image processing, e.g., in medicine, in optical character recognition, in autonomous driving, but also in recognizing people in images and videos or in deep fakes for photos and videos. AI can be expected to surpass humans in decision-making as well; an old dream of expert systems that is coming within reach through learning methods, big data, and access to the accumulated knowledge on the web. This contribution, however, is concerned less with the technical developments than with possible societal consequences of a specialized, competent AI in various areas of autonomous, i.e., not merely assistive, decision-making: as a football referee, in medicine, in judicial decisions, and, very speculatively, in the political sphere. Advantages and disadvantages of these scenarios are discussed from a societal perspective.
With the increasing adaptability and complexity of advisory artificial intelligence (AI)-based agents, the topics of explainable AI and human-centered AI are moving closer together. Variations in the explanation itself have been widely studied, with some contradictory results. These could be due to users’ individual differences, which have rarely been systematically studied regarding their inhibiting or enabling effect on the fulfillment of explanation objectives (such as trust, understanding, or workload). This paper aims to shed light on the significance of human dimensions (gender, age, trust disposition, need for cognition, affinity for technology, self-efficacy, attitudes, and mind attribution) as well as their interplay with different explanation modes (no, simple, or complex explanation). Participants played the game Deal or No Deal while interacting with an AI-based agent. The agent advised the participants on whether to accept or reject the deals offered to them. As expected, giving an explanation had a positive influence on the explanation objectives. However, the users’ individual characteristics particularly reinforced the fulfillment of the objectives. The strongest predictor of objective fulfillment was the degree of attribution of human characteristics: the more human characteristics were attributed, the more trust was placed in the agent, the more likely its advice was to be accepted and understood, and the more important needs were satisfied during the interaction. Thus, the current work contributes to a better understanding of the design of explanations for AI-based agent systems that take individual characteristics into account and meet the demand for both explainable and human-centered agent systems.
Slot machines are among the games most frequently played by people suffering from gambling disorder. New technologies like immersive Virtual Reality (VR) offer further possibilities to exploit erroneous beliefs in the context of gambling. Recent research indicates a higher risk potential when playing a slot machine in VR than on a desktop. To continue this investigation, we evaluate the effects of providing different degrees of embodiment, i.e., minimal and full embodiment. The avatars used for full embodiment further differ in their appearance, i.e., they suggest either a high or a low socio-economic status. The design of the virtual environment (VE) can also influence overall gambling behavior. Thus, we embed the slot machine in two VEs that differ in their emotional design: a colorful underwater playground environment and a virtual counterpart of our lab. These design considerations resulted in four versions of the same VR slot machine: 1) full embodiment with high socio-economic status, 2) full embodiment with low socio-economic status, 3) minimal embodiment in the playground VE, and 4) minimal embodiment in the laboratory VE. Both full-embodiment versions also used the playground VE. We determine the risk potential by logging gambling frequency and stake size, and by measuring harm-inducing factors, i.e., dissociation, urge to gamble, dark flow, and illusion of control, using questionnaires. Following a between-groups experimental design, 82 participants played one of the four versions for 20 game rounds. We recruited our sample from the students enrolled at the University of Würzburg. Our safety protocol ensured that only participants without any recent gambling activity took part in the experiment. In this comparative user study, we found no effect of embodiment or VE design on gambling frequency, stake size, or risk potential. However, our results provide further support for the hypothesis that the larger visual angle on gambling stimuli, and the resulting increased emotional response, is the true cause of the higher risk potential.
In recent years, the applications and accessibility of Virtual Reality (VR) for the healthcare sector have continued to grow. However, so far, most VR applications are only relevant in research settings. Information about what healthcare professionals would need to independently integrate VR applications into their daily working routines is missing. The actual needs and concerns of the people who work in the healthcare sector are often disregarded in the development of VR applications, even though they are the ones who are supposed to use them in practice. By means of this study, we systematically involve health professionals in the development process of VR applications. In particular, we conducted an online survey with 102 healthcare professionals based on a video prototype which demonstrates a software platform that allows them to create and utilise VR experiences on their own. For this study, we adapted and extended the Technology Acceptance Model (TAM). The survey focused on the perceived usefulness and the ease of use of such a platform, as well as the attitude and ethical concerns the users might have. The results show a generally positive attitude toward such a software platform. The users can imagine various use cases in different health domains. However, the perceived usefulness is tied to the actual ease of use of the platform and sufficient support for learning and working with the platform. In the discussion, we explain how these results can be generalized to facilitate the integration of VR in healthcare practice.