17442
2019
eng
xv, 132
1. Auflage
doctoralthesis
Würzburg University Press
Würzburg
1
2018-12-20
--
2017-12-21
An Optimization-Based Approach for Continuous Map Generalization
Optimierung für die kontinuierliche Generalisierung von Landkarten
Maps are the main tool to represent geographical information. Geographical information is usually scale-dependent, so users need access to maps at different scales. In our digital age, this access is realized by zooming. As discrete changes during zooming tend to distract users, smooth changes are preferred. This is why some digital maps aim to make zooming as continuous as possible. The process of producing maps at different scales with smooth changes is called continuous map generalization.
In order to produce maps of high quality, cartographers often take additional requirements into account. These requirements are transferred to models in map generalization. Optimization for map generalization is important not only because it finds optimal solutions in the sense of the models, but also because it helps us to evaluate the quality of the models. Optimization, however, becomes more delicate when we deal with continuous map generalization. In this area, there are requirements not only for a specific map but also for the relations between maps at different scales. This thesis is about continuous map generalization based on optimization.
First, we show the background of our research topics. Second, we find optimal sequences for aggregating land-cover areas. We compare the A* algorithm and integer linear programming in completing this task. Third, we continuously generalize county boundaries to provincial boundaries based on compatible triangulations. We morph between the two sets of boundaries, using dynamic programming to compute the correspondence. Fourth, we continuously generalize buildings to built-up areas by aggregating and growing. In this work, we group buildings with the help of a minimum spanning tree. Fifth, we define vertex trajectories that allow us to morph between polylines. We require that both the angles and the edge lengths change linearly over time. As it is impossible to fulfill all of these requirements simultaneously, we mediate between them using least-squares adjustment. Sixth, we discuss the performance of some commonly used data structures for a specific spatial problem. Seventh, we conclude this thesis and present open problems.
Maps are the main tool to represent geographical information. Users often zoom in and out to access maps at different scales. Continuous map generalization tries to make the changes between different scales smooth, which is essential to provide users with a comfortable zooming experience.
In order to achieve continuous map generalization with high quality, we optimize some important aspects of maps. In this book, we have used optimization in the generalization of land-cover areas, administrative boundaries, buildings, and coastlines. According to our experiments, continuous map generalization indeed benefits from optimization.
Landkarten sind das wichtigste Werkzeug zur Repräsentation geografischer Information. Unter der Generalisierung von Landkarten versteht man die Aufbereitung von geografischen Informationen aus detaillierten Daten zur Generierung von kleinmaßstäbigen Karten. Nutzer von Online-Karten zoomen oft in eine Karte hinein oder aus einer Karte heraus, um mehr Details bzw. mehr Überblick zu bekommen. Die kontinuierliche Generalisierung von Landkarten versucht die Änderungen zwischen verschiedenen Maßstäben stetig zu machen. Dies ist wichtig, um Nutzern eine angenehme Zoom-Erfahrung zu bieten.
Um eine qualitativ hochwertige kontinuierliche Generalisierung zu erreichen, kann man wichtige Aspekte bei der Generierung von Online-Karten optimieren. In diesem Buch haben wir Optimierung bei der Generalisierung von Landnutzungskarten, von administrativen Grenzen, Gebäuden und Küstenlinien eingesetzt. Unsere Experimente zeigen, dass die kontinuierliche Generalisierung von Landkarten in der Tat von Optimierung profitiert.
978-3-95826-104-4
978-3-95826-105-1
urn:nbn:de:bvb:20-opus-174427
10.25972/WUP-978-3-95826-105-1
Parallel erschienen als Druckausgabe in Würzburg University Press, 978-3-95826-104-4, 24,90 EUR.
CC BY-SA: Creative-Commons-Lizenz: Namensnennung, Weitergabe unter gleichen Bedingungen 4.0 International
Dongliang Peng
eng
uncontrolled
land-cover area
eng
uncontrolled
administrative boundary
eng
uncontrolled
building
eng
uncontrolled
morphing
eng
uncontrolled
data structure
eng
uncontrolled
zooming
deu
swd
Generalisierung <Kartografie>
deu
swd
Landnutzungskartierung
deu
swd
Optimierung
Datenverarbeitung; Informatik
COMPUTER-AIDED ENGINEERING
open_access
Institut für Informatik
Monographien (Books)
Würzburg University Press
Universität Würzburg
https://opus.bibliothek.uni-wuerzburg.de/files/17442/978-3-95826-105-1_Peng_Dongliang_OPUS_17442.pdf
18394
2019
eng
24
preprint
1
--
--
--
Biological heuristics applied to cosmology suggests a condensation nucleus as start of our universe and inflation cosmology replaced by a period of rapid Weiss domain-like crystal growth
Cosmology often uses intricate formulas and mathematics to derive new theories and concepts. We do something different in this paper: we look at biological processes and derive heuristics from them, so that the revised cosmology agrees with astronomical observations while also agreeing with standard biological observations. We show that we then have to replace any type of singularity at the start of the universe by a condensation nucleus, and that the very early period of the universe, usually assumed to be inflation, has to be replaced by a period of rapid crystal growth as in Weiss magnetization domains.
Impressively, these minor modifications agree well with astronomical observations, including removing the strong inflation perturbations which were never observed in the recent BICEP2 experiments. Furthermore, looking at biological principles suggests that such a new theory, with a condensation nucleus at the start and a first rapid phase of magnetization-like growth of the ordered, physical-law-obeying lattice we live in, is in fact the only convincing theory of the early phases of our universe that is also compatible with current observations.
We show in detail in the following that such a process of crystal creation, breaking of new crystal seeds and ultimate evaporation of the present crystal readily leads, over several generations, to an evolution and selection of better, more stable and more self-organizing crystals. Moreover, this answers the “fine-tuning” question of why our universe is fine-tuned to favor life: our universe is self-organizing so as to have enough offspring, and the detailed physics involved is at the same time highly favorable for all self-organizing processes, including life.
This biological theory contrasts with current standard inflation cosmologies. The latter do not perform well in explaining any phenomena of sophisticated structure creation or self-organization. As proteins can only fold thermodynamically by increasing the entropy of the solution around them, we suggest for cosmology that a condensation nucleus for a universe can form only in a “chaotic ocean” of string soup or quantum foam if the entropy outside the nucleus rapidly increases. We derive an interaction potential for 1- to n-dimensional strings or quantum foams and show that they allow only 1D, 2D, 4D or octonion interactions. The latter is the richest structure and agrees with the E8 symmetry fundamental to particle physics; it is also compatible with the ten-dimensional E8 string theory which is part of M-theory. Interestingly, interactions of any other dimensionality can be ruled out using Hurwitz's composition theorem. Crystallization also explains extremely well why we have only one macroscopic reality and where the worldlines of alternative trajectories exist: they lie in other planes of the crystal, and for energy reasons they crystallize mostly at the same time, yielding a beautiful and stable crystal. This explains decoherence and allows us to determine the size of Planck's quantum h (very small, as the separation of crystal layers by energy is extremely strong).
Ultimate dissolution of real crystals suggests an explanation for dark energy agreeing with estimates for the “big rip”. The halo distribution of dark matter favoring galaxy formation is readily explained by a crystal seed starting with unit cells made of normal and dark matter.
That we have only matter and not antimatter can be explained if there are right-handed matter crystals and left-handed antimatter crystals. Similarly, real crystals are never perfect, and we argue that exactly such irregularities allow the formation of galaxies, clusters and superclusters. Finally, heuristics from genetics suggest taking a systems perspective to derive correct vacuum and Higgs boson energies.
urn:nbn:de:bvb:20-opus-183945
false
true
CC BY: Creative-Commons-Lizenz: Namensnennung 4.0 International
Thomas Dandekar
eng
uncontrolled
heuristics
eng
uncontrolled
inflation
eng
uncontrolled
cosmology
eng
uncontrolled
crystallization
eng
uncontrolled
crystal growth
eng
uncontrolled
E8 symmetry
eng
uncontrolled
Hurwitz theorem
eng
uncontrolled
evolution
eng
uncontrolled
Lee Smolin
Datenverarbeitung; Informatik
Biowissenschaften; Biologie
open_access
Theodor-Boveri-Institut für Biowissenschaften
Universität Würzburg
https://opus.bibliothek.uni-wuerzburg.de/files/18394/Dandekar_Preprint_Crystal_2019.pdf
20227
2019
eng
100261
10
article
1
2020-03-25
--
--
DDRS4PALS: a software for the acquisition and simulation of lifetime spectra using the DRS4 evaluation board
Lifetime techniques are applied to diverse fields of study including materials sciences, semiconductor physics, biology, molecular biophysics and photochemistry.
Here we present DDRS4PALS, a software for the acquisition and simulation of lifetime spectra using the DRS4 evaluation board (Paul Scherrer Institute, Switzerland) for time-resolved measurements and the digitization of detector output pulses. Artifact-afflicted pulses can be corrected or rejected prior to the lifetime calculation to enable the generation of high-quality lifetime spectra, which are crucial for a profound analysis, i.e. the decomposition of the true information. Moreover, the pulses can be streamed to an (external) hard drive during the measurement and subsequently loaded in offline mode without being connected to the hardware. This allows the generation of various lifetime spectra at different configurations from one single measurement and, hence, a meaningful comparison in terms of analyzability and quality. Parallel processing and an integrated JavaScript-based language provide convenient options to accelerate and automate time-consuming processes such as lifetime spectrum simulations.
SoftwareX
10.1016/j.softx.2019.100261
urn:nbn:de:bvb:20-opus-202276
SoftwareX (2019) 10:100261. https://doi.org/10.1016/j.softx.2019.100261
false
true
CC BY: Creative-Commons-Lizenz: Namensnennung 4.0 International
Danny Petschke
Torsten E.M. Staab
eng
uncontrolled
Lifetime spectroscopy
eng
uncontrolled
Positron annihilation spectroscopy
eng
uncontrolled
Simulation
eng
uncontrolled
Time resolved measurements
Datenverarbeitung; Informatik
open_access
Institut für Funktionsmaterialien und Biofabrikation
Förderzeitraum 2019
Universität Würzburg
https://opus.bibliothek.uni-wuerzburg.de/files/20227/Petschke_SoftwareX_2019.pdf
19337
2019
deu
447
452
3
43
article
1
--
2019-12-03
--
Erkennung von handschriftlichen Unterstreichungen in Alten Drucken
Die Erkennung handschriftlicher Artefakte wie Unterstreichungen in Buchdrucken ermöglicht Rückschlüsse auf das Rezeptionsverhalten und die Provenienzgeschichte und wird auch für eine OCR benötigt. Dabei soll zwischen handschriftlichen Unterstreichungen und waagerechten Linien im Druck (z. B. Trennlinien usw.) unterschieden werden, da letztere nicht ausgezeichnet werden sollen. Im Beitrag wird ein Ansatz basierend auf einem auf Unterstreichungen trainierten neuronalen Netz gemäß der U-Net-Architektur vorgestellt, dessen Ergebnisse in einem zweiten Schritt mit heuristischen Regeln nachbearbeitet werden. Die Evaluationen zeigen, dass Unterstreichungen sehr gut erkannt werden, wenn bei der Binarisierung der Scans nicht zu viele Pixel der Unterstreichung wegen geringem Kontrast verloren gehen. Zukünftig sollen die Worte oberhalb der Unterstreichung mit OCR transkribiert und auch andere Artefakte wie handschriftliche Notizen in alten Drucken erkannt werden.
The recognition of handwritten artefacts like underlines in historical printings allows inferences about the reception and provenance history and is necessary for OCR (optical character recognition). In this context it is important to differentiate between handwritten and printed lines, since the latter are common in printings but should be ignored. We present an approach based on neural nets with the U-Net architecture, whose segmentation results are post-processed with heuristic rules. The evaluations show that handwritten underlines are recognized very well if the binarisation of the scans is adequate. Future work includes transcription of the underlined words with OCR and recognition of other artefacts like handwritten notes in historical printings.
Bibliothek Forschung und Praxis
Recognition of handwritten underlines in historical printings
1865-7648
0341-4183
10.1515/bfp-2019-2083
urn:nbn:de:bvb:20-opus-193377
Dieser Beitrag ist mit Zustimmung des Rechteinhabers aufgrund einer (DFG-geförderten) Allianz- bzw. Nationallizenz frei zugänglich.
swordwue
2019-12-18T18:36:12+00:00
Bibliothek Forschung und Praxis (2019) 43:3, 447–452. https://doi.org/10.1515/bfp-2019-2083
true
true
Deutsches Urheberrecht
Alexander Gehrke
Nico Balbach
Yong-Mi Rauch
Andreas Degkwitz
Frank Puppe
deu
uncontrolled
Brüder Grimm Privatbibliothek
deu
uncontrolled
Erkennung handschriftlicher Artefakte
deu
uncontrolled
Convolutional Neural Network
deu
uncontrolled
regelbasierte Nachbearbeitung
eng
uncontrolled
Grimm brothers personal library
eng
uncontrolled
handwritten artefact recognition
eng
uncontrolled
convolutional neural network
eng
uncontrolled
rule based post processing
Datenverarbeitung; Informatik
Bibliotheks- und Informationswissenschaften
open_access
Institut für Informatik
Import
Universität Würzburg
https://opus.bibliothek.uni-wuerzburg.de/files/19337/bfp-2019-2083.pdf
19723
2019
eng
999
7
8
article
1
--
2019-07-09
--
Exploration of artificial intelligence use with ARIES in multiple myeloma research
Background: Natural language processing (NLP) is a powerful tool supporting the generation of Real-World Evidence (RWE). There is no NLP system that enables the extensive querying of parameters specific to multiple myeloma (MM) out of unstructured medical reports. We therefore created an MM-specific ontology to accelerate the information extraction (IE) out of unstructured text. Methods: Our MM ontology consists of extensive MM-specific and hierarchically structured attributes and values. We implemented “A Rule-based Information Extraction System” (ARIES) that uses this ontology. We evaluated ARIES on 200 randomly selected medical reports of patients diagnosed with MM. Results: Our system achieved a high F1-Score of 0.92 on the evaluation dataset, with a precision of 0.87 and a recall of 0.98. Conclusions: Our rule-based IE system enables the comprehensive querying of medical reports. The IE accelerates the extraction of data and enables clinicians to generate RWE on hematological issues faster. RWE helps clinicians to make decisions in an evidence-based manner. Our tool thus accelerates the integration of research evidence into everyday clinical practice.
Journal of Clinical Medicine
2077-0383
10.3390/jcm8070999
urn:nbn:de:bvb:20-opus-197231
Journal of Clinical Medicine 2019, 8(7), 999; https://doi.org/10.3390/jcm8070999
CC BY: Creative-Commons-Lizenz: Namensnennung 4.0 International
Sophia Loda
Jonathan Krebs
Sophia Danhof
Martin Schreder
Antonio G. Solimando
Susanne Strifler
Leo Rasche
Martin Kortüm
Alexander Kerscher
Stefan Knop
Frank Puppe
Hermann Einsele
Max Bittrich
eng
uncontrolled
natural language processing
eng
uncontrolled
ontology
eng
uncontrolled
artificial intelligence
eng
uncontrolled
multiple myeloma
eng
uncontrolled
real world evidence
Datenverarbeitung; Informatik
Medizin und Gesundheit
open_access
Institut für Informatik
Medizinische Klinik und Poliklinik II
Import
Förderzeitraum 2019
Universität Würzburg
https://opus.bibliothek.uni-wuerzburg.de/files/19723/jcm-08-00999-v3.pdf
17866
2019
eng
doctoralthesis
1
2019-03-28
--
2019-03-26
Extracting and Learning Semantics from Social Web Data
Extraktion und Lernen von Semantik aus Social Web-Daten
Making machines understand natural language is a dream of mankind that has existed for a very long time. Early attempts at programming machines to converse with humans in a supposedly intelligent way relied on phrase lists and simple keyword matching. However, such approaches cannot provide semantically adequate answers, as they do not consider the specific meaning of the conversation. Thus, if we want to enable machines to actually understand language, we need to be able to access semantically relevant background knowledge. For this, it is possible to query so-called ontologies, which are large networks containing knowledge about real-world entities and their semantic relations. However, creating such ontologies is a tedious task, as extensive expert knowledge is often required. Thus, we need to find ways to automatically construct and update ontologies that fit the human intuition of semantics and semantic relations. More specifically, we need to determine semantic entities and find relations between them. While this is usually done on large corpora of unstructured text, previous work has shown that we can at least facilitate the first issue of extracting entities by considering special data such as tagging data or human navigational paths. Here, we do not need to detect the actual semantic entities, as they are already provided by the way those data are collected. Thus we can mainly focus on the problem of assessing the degree of semantic relatedness between tags or web pages. However, several issues need to be overcome if we want to approximate the human intuition of semantic relatedness. For this, it is necessary to represent words and concepts in a way that allows easy and highly precise semantic characterization. This also largely depends on the quality of the data from which these representations are constructed.

In this thesis, we extract semantic information both from tagging data created by users of social tagging systems and from human navigation data in different semantic-driven social web systems. Our main goal is to construct high-quality and robust vector representations of words, which can then be used to measure the relatedness of semantic concepts. First, we show that navigation in the social media systems Wikipedia and BibSonomy is driven by a semantic component. After this, we discuss and extend methods to model the semantic information in tagging data as low-dimensional vectors. Furthermore, we show that tagging pragmatics influences different facets of tagging semantics. We then investigate the usefulness of human navigational paths in several different settings on Wikipedia and BibSonomy for measuring semantic relatedness. Finally, we propose a metric-learning-based algorithm to adapt pre-trained word embeddings to datasets containing human judgments of semantic relatedness.

This work contributes to the field of studying semantic relatedness between words by proposing methods to extract semantic relatedness from web navigation, to learn high-quality and low-dimensional word representations from tagging data, and to learn semantic relatedness from any kind of vector representation by exploiting human feedback. Applications lie first and foremost in ontology learning for the Semantic Web, but also in semantic search and query expansion.
Einer der großen Träume der Menschheit ist es, Maschinen dazu zu bringen, natürliche Sprache zu verstehen. Frühe Versuche, Computer dahingehend zu programmieren, dass sie mit Menschen vermeintlich intelligente Konversationen führen können, basierten hauptsächlich auf Phrasensammlungen und einfachen Stichwortabgleichen. Solche Ansätze sind allerdings nicht in der Lage, inhaltlich adäquate Antworten zu liefern, da der tatsächliche Inhalt der Konversation nicht erfasst werden kann. Folgerichtig ist es notwendig, dass Maschinen auf semantisch relevantes Hintergrundwissen zugreifen können, um diesen Inhalt zu verstehen. Solches Wissen ist beispielsweise in Ontologien vorhanden. Ontologien sind große Datenbanken von vernetztem Wissen über Objekte und Gegenstände der echten Welt sowie über deren semantische Beziehungen. Das Erstellen solcher Ontologien ist eine sehr kostspielige und aufwändige Aufgabe, da oft tiefgreifendes Expertenwissen benötigt wird. Wir müssen also Wege finden, um Ontologien automatisch zu erstellen und aktuell zu halten, und zwar in einer Art und Weise, dass dies auch menschlichem Empfinden von Semantik und semantischer Ähnlichkeit entspricht. Genauer gesagt ist es notwendig, semantische Entitäten und deren Beziehungen zu bestimmen. Während solches Wissen üblicherweise aus Textkorpora extrahiert wird, ist es möglich, zumindest das erste Problem, semantische Entitäten zu bestimmen, durch Benutzung spezieller Datensätze zu umgehen, wie zum Beispiel Tagging- oder Navigationsdaten. In diesen Arten von Datensätzen ist es nicht notwendig, Entitäten zu extrahieren, da sie bereits aufgrund inhärenter Eigenschaften bei der Datenakquise vorhanden sind. Wir können uns also hauptsächlich auf die Bestimmung von semantischen Relationen und deren Intensität fokussieren. Trotzdem müssen hier noch einige Hindernisse überwunden werden. Beispielsweise ist es notwendig, Repräsentationen für semantische Entitäten zu finden, sodass es möglich ist, sie einfach und semantisch hochpräzise zu charakterisieren. Dies hängt allerdings auch erheblich von der Qualität der Daten ab, aus denen diese Repräsentationen konstruiert werden.

In der vorliegenden Arbeit extrahieren wir semantische Informationen sowohl aus Taggingdaten, die von Benutzern sozialer Taggingsysteme erzeugt wurden, als auch aus Navigationsdaten von Benutzern semantikgetriebener Social-Media-Systeme. Das Hauptziel dieser Arbeit ist es, hochqualitative und robuste Vektordarstellungen von Worten zu konstruieren, die dann dazu benutzt werden können, die semantische Ähnlichkeit von Konzepten zu bestimmen. Als Erstes zeigen wir, dass Navigation in Social-Media-Systemen unter anderem durch eine semantische Komponente getrieben wird. Danach diskutieren und erweitern wir Methoden, um die semantische Information in Taggingdaten als niedrigdimensionale sogenannte „Embeddings“ darzustellen. Darüber hinaus demonstrieren wir, dass die Taggingpragmatik verschiedene Facetten der Taggingsemantik beeinflusst. Anschließend untersuchen wir, inwieweit wir menschliche Navigationspfade zur Bestimmung semantischer Ähnlichkeit benutzen können. Hierzu betrachten wir mehrere Datensätze, die Navigationsdaten in verschiedenen Rahmenbedingungen beinhalten. Als Letztes stellen wir einen neuartigen Algorithmus vor, um bereits trainierte Word Embeddings im Nachhinein an menschliche Intuition von Semantik anzupassen.

Diese Arbeit steuert wertvolle Beiträge zum Gebiet der Bestimmung von semantischer Ähnlichkeit bei: Es werden Methoden vorgestellt, um hochqualitative semantische Information aus Web-Navigation und Taggingdaten zu extrahieren, diese mittels niedrigdimensionaler Vektordarstellungen zu modellieren und selbige schließlich besser an menschliches Empfinden von semantischer Ähnlichkeit anzupassen, indem aus genau diesem Empfinden gelernt wird. Anwendungen liegen in erster Linie darin, Ontologien für das Semantic Web zu lernen, allerdings auch in allen Bereichen, die Vektordarstellungen von semantischen Entitäten benutzen.
urn:nbn:de:bvb:20-opus-178666
10.25972/OPUS-17866
X 128100
CC BY: Creative-Commons-Lizenz: Namensnennung 4.0 International
Thomas Niebler
deu
swd
Semantik
deu
swd
Maschinelles Lernen
deu
swd
Soziale Software
eng
uncontrolled
Semantics
eng
uncontrolled
User Behavior
eng
uncontrolled
Social Web
eng
uncontrolled
Machine Learning
Datenverarbeitung; Informatik
Knowledge acquisition
open_access
Institut für Informatik
Universität Würzburg
Universität Würzburg
https://opus.bibliothek.uni-wuerzburg.de/files/17866/thomas_niebler_extracting.pdf
20249
2019
eng
100027
3
article
1
2020-04-01
--
--
Improving engineering models of terramechanics for planetary exploration
This short letter proposes more consolidated explicit solutions for the forces and torques acting on typical rover wheels, which can be used as a method to determine their average mobility characteristics in planetary soils. The closed-form solutions are based on one of the verified methods but, unlike the previous one, decouple the observables, requiring fewer physical parameters to be measured. As a result, we show that, with knowledge of the terrain properties, wheel driving performance relies on a single observable only. Because of their generality, the equations formulated here can have further implications for the autonomy and control of rovers and for planetary soil characterization.
Results in Engineering
10.1016/j.rineng.2019.100027
urn:nbn:de:bvb:20-opus-202490
Results in Engineering (2019) 3:100027. https://doi.org/10.1016/j.rineng.2019.100027
false
true
CC BY-NC-ND: Creative-Commons-Lizenz: Namensnennung, Nicht kommerziell, Keine Bearbeitungen 4.0 International
A. J. R. Lopez-Arreguin
S. Montenegro
eng
uncontrolled
Wheel
eng
uncontrolled
Terramechanics
eng
uncontrolled
Forces
eng
uncontrolled
Torque
eng
uncontrolled
Robotics
Datenverarbeitung; Informatik
open_access
Institut für Informatik
Förderzeitraum 2019
Universität Würzburg
https://opus.bibliothek.uni-wuerzburg.de/files/20249/Lopez-Arreguin_ResultsInEngineering_2019.pdf
18826
2019
deu
76
1. Auflage
annualreport
Rechenzentrum (Universität Würzburg)
1
2019-10-04
--
--
Jahresbericht 2018 des Rechenzentrums der Universität Würzburg
Annual Report 2018 of the Computer Center, University of Wuerzburg
Eine Übersicht über die Aktivitäten des Rechenzentrums im Jahr 2018.
https://www.rz.uni-wuerzburg.de/wir/publikationen/
urn:nbn:de:bvb:20-opus-188265
false
true
CC BY: Creative-Commons-Lizenz: Namensnennung 4.0 International
Matthias Funken
Michael Tscherner
Jahresbericht des Rechenzentrums der Universität Würzburg
2018
deu
swd
Julius-Maximilians-Universität Würzburg
deu
swd
Jahresbericht
deu
uncontrolled
Jahresbericht
deu
swd
Rechenzentrum
deu
uncontrolled
RZUW
eng
uncontrolled
annual report
eng
uncontrolled
Computer Center University of Wuerzburg
Datenverarbeitung; Informatik
General Literature
open_access
Rechenzentrum
Universität Würzburg
https://opus.bibliothek.uni-wuerzburg.de/files/18826/Jahresbericht_RZ_2018.pdf
20115
2019
eng
7626349
2019
article
1
2020-03-11
--
--
Knowledge encoding in game mechanics: transfer-oriented knowledge learning in desktop-3D and VR
Affine Transformations (ATs) are a complex and abstract learning content. Encoding the AT knowledge in Game Mechanics (GMs) achieves a repetitive knowledge application and audiovisual demonstration. Playing a serious game providing these GMs leads to motivating and effective knowledge learning. Using immersive Virtual Reality (VR) has the potential to further increase the serious game’s learning outcome and learning quality. This paper compares the effectiveness and efficiency of desktop-3D and VR with respect to the achieved learning outcome. The present study also analyzes the effectiveness of an enhanced audiovisual knowledge encoding and the provision of a debriefing system. The results validate the effectiveness of the knowledge encoding in GMs to achieve knowledge learning. The study also indicates that VR is beneficial for the overall learning quality and that an enhanced audiovisual encoding has only a limited effect on the learning outcome.
International Journal of Computer Games Technology
10.1155/2019/7626349
urn:nbn:de:bvb:20-opus-201159
International Journal of Computer Games Technology (2019) 2019:7626349. https://doi.org/10.1155/2019/7626349
false
true
CC BY: Creative-Commons-Lizenz: Namensnennung 4.0 International
Sebastian Oberdörfer
Marc Erich Latoschik
eng
uncontrolled
games
Datenverarbeitung; Informatik
open_access
Institut für Informatik
Förderzeitraum 2019
Universität Würzburg
https://opus.bibliothek.uni-wuerzburg.de/files/20115/Oberdoerfer_InternationalJournalOfComputerGamesTechnology_2019.pdf.pdf
19883
2019
eng
105
10
6
article
1
--
2019-09-24
--
Model-based fault detection and diagnosis for spacecraft with an application for the SONATE triple cube nano-satellite
The correct behavior of spacecraft components is the foundation of unhindered mission operation. However, no technical system is free of wear and degradation. A malfunction of one single component might significantly alter the behavior of the whole spacecraft and may even lead to a complete mission failure. Therefore, abnormal component behavior must be detected early in order to be able to perform countermeasures. A dedicated fault detection system can be employed, as opposed to classical health monitoring performed by human operators, to decrease the response time to a malfunction. In this paper, we present a generic model-based diagnosis system, which detects faults by analyzing the spacecraft’s housekeeping data. The observed behavior of the spacecraft components, given by the housekeeping data, is compared to their expected behavior, obtained through simulation. Each discrepancy between the observed and the expected behavior of a component generates a so-called symptom. Given the symptoms, the diagnoses are derived by computing sets of components whose malfunction might cause the observed discrepancies. We demonstrate the applicability of the diagnosis system by using modified housekeeping data of the qualification model of an actual spacecraft and outline the advantages and drawbacks of our approach.
Aerospace
2226-4310
10.3390/aerospace6100105
urn:nbn:de:bvb:20-opus-198836
Aerospace 2019, 6(10), 105; https://doi.org/10.3390/aerospace6100105
false
true
CC BY: Creative-Commons-Lizenz: Namensnennung 4.0 International
Kirill Djebko
Frank Puppe
Hakan Kayal
eng
uncontrolled
fault detection
eng
uncontrolled
model-based diagnosis
eng
uncontrolled
nano-satellite
Datenverarbeitung; Informatik
open_access
Institut für Informatik
Import
Förderzeitraum 2019
Universität Würzburg
https://opus.bibliothek.uni-wuerzburg.de/files/19883/aerospace-06-00105.pdf