TY - THES
A1 - Schröter, Martin
T1 - Newton Methods for Image Registration
T1 - Newton-Methoden zur Bildregistrierung
N2 - Consider the situation where two or more images are taken from the same object. After the first image is taken, the object is moved or rotated, so that the second recording depicts it in a different manner. In addition, the imaging technique may also have changed between the recordings. One of the main problems in image processing is to determine the spatial relation between such images. The corresponding process of finding the spatial alignment is called "registration". In this work, we study the optimization problem which corresponds to the registration task. In particular, we exploit the Lie group structure of the set of transformations to construct efficient, intrinsic algorithms. We also apply the algorithms to medical registration tasks. However, the methods developed are not restricted to the field of medical image processing. We also take a closer look at more general forms of optimization problems and show connections to related tasks.
N2 - Wir betrachten Problemstellungen, in denen zwei Bilder von ein und demselben Objekt aufgenommen wurden. Nach der ersten Aufnahme hat sich allerdings das Objekt bewegt oder deformiert, so dass es sich in den nächsten Bildern auf eine andere Weise darstellt. Zudem kann sich die Aufnahmetechnik geändert haben. Eines der Hauptprobleme in der Bildverarbeitung ist es, die räumliche Korrespondenz zwischen solchen Bildern zu bestimmen. Die zugehörige Aufgabe, eine solche räumliche Übereinstimmung zu finden, nennt man "Registrierung". In dieser Arbeit untersuchen wir das mit der Registrierung verbundene Optimierungsproblem. Insbesondere nutzen wir die Lie-Gruppen-Struktur der Menge der zulässigen Transformationen aus, um effiziente, intrinsische Algorithmen zu entwickeln. Wir wenden diese dann auf Probleme der medizinischen Bildregistrierung an, jedoch sind unsere Methoden nicht auf dieses Feld beschränkt. Wir werfen auch einen genaueren Blick auf eine allgemeinere Form von Optimierungsproblemen und zeigen Verknüpfungen zu verwandten Fragestellungen auf.
KW - Newton-Verfahren
KW - Registrierung
KW - Stochastische Optimierung
KW - Newton Methods
KW - Image Registration
KW - Stochastic Algorithms
KW - Optimization on Lie Groups
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-71490
ER -
TY - THES
A1 - Oechsner, Simon
T1 - Performance Challenges and Optimization Potential of Peer-to-Peer Overlay Technologies
T1 - Leistungsanforderungen und Optimierungspotential von Peer-to-Peer Overlay-Technologien
N2 - In today's Internet, building overlay structures to provide a service is becoming more and more common. This approach allows for the utilization of client resources and is thus more scalable than a client-server model in this respect. However, in these architectures the quality of the provided service depends on the clients and is therefore more complex to manage. Resource utilization, both at the clients themselves and in the underlying network, determines the efficiency of the overlay application. Here, a trade-off exists between the resource providers and the end users that can be tuned via overlay mechanisms. Thus, resource management and traffic management are always quality-of-service management as well. In this monograph, the three currently significant and most widely used overlay types in the Internet are considered.
These overlays are implemented in popular applications which have only recently gained importance. Thus, these overlay networks still face real-world technical challenges which are of high practical relevance. We identify the specific issues for each of the considered overlays and show how their optimization affects the trade-offs between resource efficiency and service quality. Thus, we supply new insights and system knowledge that is not provided by previous work.
N2 - Im heutigen Internet werden immer häufiger Overlay-Strukturen aufgebaut, um eine Dienstleistung zu erbringen. Dieser Ansatz ermöglicht die Nutzung von Client-Ressourcen, so dass er in dieser Hinsicht besser skaliert als das Client-Server-Modell. Die Qualität des zur Verfügung gestellten Dienstes hängt nun aber von den Clients ab und ist daher komplizierter zu steuern. Die Ressourcennutzung, sowohl auf den Clients selbst als auch in dem zugrunde liegenden Netzwerk, bestimmt die Effizienz der Overlay-Anwendung. Hier existiert ein Trade-off zwischen Ressourcen-Anbietern und Endkunden, der über Overlay-Mechanismen geregelt werden kann. Daher ist Ressourcenmanagement und Traffic-Management gleichzeitig immer auch Quality-of-Service-Management. In dieser Arbeit werden die drei derzeit am weitesten im Internet verbreiteten und signifikanten Overlay-Typen berücksichtigt. Diese Overlays sind in populären Anwendungen, die erst vor kurzem an Bedeutung gewonnen haben, implementiert. Daher sind diese Overlay-Netze nach wie vor realen technischen Herausforderungen ausgesetzt, die von hoher praktischer Relevanz sind. Die spezifischen Herausforderungen für jedes der betrachteten Overlays werden identifiziert und es wird gezeigt, wie deren Optimierung den Trade-off zwischen Ressourceneffizienz und Service-Qualität beeinflusst. So werden neue Einsichten und Erkenntnisse über diese Systeme gewonnen, die in früheren Arbeiten nicht existieren.
T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 02/10
KW - Overlay-Netz
KW - Peer-to-Peer-Netz
KW - Leistungsbewertung
KW - Overlays
KW - Peer-to-Peer
KW - Performance Evaluation
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-50015
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - A Rule-based Statistical Classifier for Determining a Base Text and Ranking Witnesses In Textual Documents Collation Process
N2 - Given a collection of diverging documents about some lost original text, any person interested in the text would try to reconstruct it from the diverging documents. Whether one follows eclecticism, stemmatics, or copy-text, one is expected to explicitly or implicitly select one of the documents as a starting point or base text, which can be emended through comparison with the remaining documents, so that a text that could be designated as the original document is generated. Unfortunately, the process of giving priority to one of the documents, also known as witnesses, is subjective. In fact, even Cladistics, which could be considered a computer-based approach to implementing stemmatics, does not recommend a particular witness as a starting point for the process of reconstructing the original document. In this study, a computational method using a rule-based Bayesian classifier is presented to assist textual scholars in their attempts to reconstruct a lost document from the available witnesses. The method developed in this study consists of successively selecting a base text and collating it with the remaining documents.
Each completed collation cycle stores the selected base text and its closest witness, along with a weighted score of their similarities and differences. At the end of the collation process, the witness selected by the majority of base texts is considered the probable base text of the collection. The witnesses' scores are weighted using a weighting system based on the effects that the types of textual modifications have on the process of reconstructing original documents. Users can choose between baseless and base-text collation. If a base text is selected, the task is reduced to ranking the witnesses with respect to the base text; otherwise, a base text as well as a ranking of the witnesses with respect to it are computed and displayed on a histogram.
KW - Textvergleich
KW - Text Mining
KW - Gothenburg Modell
KW - Bayes-Klassifikator
KW - Textual document collation
KW - Base text
KW - Gothenburg model
KW - Bayesian classifier
KW - Textual alterations weighting system
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-57465
ER -
TY - THES
A1 - Henjes, Robert
T1 - Performance Evaluation of Publish/Subscribe Middleware Architectures
T1 - Leistungsuntersuchung von Publish/Subscribe Middleware Architekturen
N2 - While developing modern applications, it is necessary to ensure efficient and performant communication between different applications. In current environments, middleware software is used which supports the publish/subscribe communication pattern. Using this communication pattern, a publisher sends information encapsulated in messages to the middleware. A subscriber registers its interests with the middleware. The monograph describes three different steps to determine the performance of such a system. In a first step, the message throughput performance of a publish/subscribe system in different scenarios is measured using a Java Message Service (JMS) based implementation. In the second step, the maximum achievable message throughput is described by adapted models depending on the filter complexity and the degree of message replication. Using the model, the performance characteristics of a specific system in a given scenario can be determined. These numbers are used for the queuing model described in the third part of the thesis, which supports the dimensioning of a system in realistic scenarios. Additionally, we introduce a method to approximate an M/G/1 system numerically in an efficient way, which can be used for real-time analysis to predict the expected performance in a certain scenario. Finally, the analytical model is used to investigate different possibilities to ensure the scalability of the maximum achievable message throughput of the overall system.
N2 - Bei der Entwicklung moderner Applikationen ist es notwendig, eine effiziente und performante Kommunikation zwischen den einzelnen Anwendungen sicherzustellen. In der Praxis kommt dabei eine Middleware-Software zum Einsatz, die das Publish/Subscribe-Kommunikationsmuster unterstützt. Dabei senden Publisher Informationen in Form von Nachrichten an die Middleware. Die Subscriber hingegen zeigen durch die Nutzung von Filtern der Middleware an, welche Informationen zugestellt werden sollen. Die Arbeit beschreibt ein dreistufiges Verfahren zur Leistungsbestimmung eines solchen Systems. Zunächst wird durch Messung die Leistung von Publish/Subscribe-Systemen in verschiedenen Szenarien untersucht, am Beispiel von Java Message Service (JMS) basierten Implementierungen.
Danach wird der maximale Nachrichtendurchsatz in Abhängigkeit der Filterkomplexität und des Nachrichtenreplikationsgrades durch einfache Modelle beschrieben. Damit können die Leistungskennwerte für ein System unter vorgegebenen Randbedingungen beschrieben werden. Im dritten Teil wird mittels Leistungsbewertung und durch Anwendung eines Warteschlangenmodells die Leistung im praxisnahen Umfeld beschrieben, so dass eine Dimensionierung möglich wird. Zusätzlich wird ein mathematisches, approximatives Verfahren vorgestellt, um ein M/G/1-System numerisch effizient berechnen zu können, was bei der Echtzeitbewertung eines Systems zur Leistungsvorhersage benutzt werden kann. Des Weiteren werden mittels des Modells Möglichkeiten untersucht, die Skalierbarkeit des Gesamtsystems in Bezug auf den Nachrichtendurchsatz sicherzustellen.
T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 04/10
KW - Middleware
KW - Publish-Subscribe-System
KW - Java Message Service
KW - Leistungsbewertung
KW - Middleware
KW - Publish-Subscribe-System
KW - Java Message Service
KW - Performance Evaluation
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-53388
ER -
TY - CHAP
ED - Kolla, Reiner
T1 - 9. Fachgespräch Sensornetze der GI/ITG Fachgruppe Kommunikation und Verteilte Systeme
N2 - Jährliches Fachgespräch zu Sensornetzen der GI/ITG Fachgruppe Kommunikation und Verteilte Systeme, 16. - 17. September 2010, Universität Würzburg
KW - Drahtloses Sensorsystem
KW - Fachgespräch
KW - Aufsatzsammlung
KW - sensor network
KW - wireless network
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-51106
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Assisting Understanding, Retention, and Dissemination of Religious Texts Knowledge with Modeling, and Visualization Techniques: The Case of The Quran
N2 - Learning a book in general involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts. Once deeper understanding is achieved, one would like to organize and manage one's knowledge in such a way that it can be easily remembered and efficiently transmitted to others. In this paper, books organized in terms of chapters consisting of verses are considered as the source of knowledge to be modeled. The knowledge model consists of verses with their metadata and semantic annotations. The metadata represent the multiple perspectives of knowledge modeling. Verses with their metadata and annotations form a meta-model, which will be published on a web mashup. The meta-model, with links between its elements, constitutes a knowledge base. An XML-based annotation system that breaks the learning process down into specific tasks helps construct the desired meta-model. The system is made up of user interfaces for creating metadata, annotating chapters' contents according to user-selected semantics, and templates for publishing the generated knowledge on the Internet. The proposed software system improves comprehension and retention of the knowledge contained in religious texts through modeling and visualization. The system has been applied to the Quran, and the result obtained shows that multiple perspectives of information modeling can be successfully applied to religious texts. It is expected that this short ongoing study will motivate others to engage in devising and offering software systems for cross-religion learning.
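As an illustration of the annotation workflow this abstract describes, the following Python sketch (not taken from the paper; element and attribute names such as "meta" and "annotation" are assumed for illustration) shows how verses, metadata perspectives, and semantic annotations could be combined into one XML meta-model:

    import xml.etree.ElementTree as ET

    def build_meta_model(chapter_no, verses):
        # One <chapter> element; each verse carries its text, one <meta>
        # entry per modeling perspective, and its semantic annotations.
        chapter = ET.Element("chapter", n=str(chapter_no))
        for no, (text, meta, concepts) in verses.items():
            verse = ET.SubElement(chapter, "verse", n=str(no))
            ET.SubElement(verse, "text").text = text
            for name, value in meta.items():        # multiple perspectives
                ET.SubElement(verse, "meta", name=name).text = value
            for concept in concepts:                # user-selected semantics
                ET.SubElement(verse, "annotation", concept=concept)
        return chapter

    verses = {1: ("In the name of God, the Merciful...",
                  {"theme": "praise", "revelation": "Mecca"},
                  ["mercy"])}
    print(ET.tostring(build_meta_model(1, verses), encoding="unicode"))

Linking such chapter trees by shared annotation concepts would then yield the knowledge base the abstract mentions.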
KW - Wissensmanagement
KW - Koran
KW - Knowledge Modeling
KW - Meta-model
KW - Knowledge Management
KW - Content Management
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55927
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Design and Implementation of Architectures for Interactive Textual Documents Collation Systems
N2 - One of the main purposes of collating textual documents is to identify a base text or the witness closest to the base text by analyzing and interpreting the differences, also known as types of changes, that might exist between those documents. Based on this fact, it is reasonable to argue that explicit identification of types of changes such as deletions, additions, transpositions, and mutations should be part of the collation process. The identification could be carried out by an interpretation module after alignment has taken place. Unfortunately, existing collation software such as CollateX and Juxta's collation engine do not have interpretation modules. In fact, they implement the Gothenburg model [1] of the collation process, which does not include an interpretation unit. Currently, neither CollateX nor Juxta's collation engine distinguishes between the types of changes in its critical apparatus, and neither offers statistics about those changes. This paper presents a model for both integrated and distributed collation processes that improves the Gothenburg model. The model introduces an interpretation component for computing and distinguishing between the types of changes that documents could have undergone. Moreover, two architectures implementing the model in order to solve the problem of interactive collation are discussed. Each architecture uses the CollateX library and provides, on the one hand, preprocessing functions for transforming input documents into the CollateX input format and, on the other hand, a post-processing module for enabling interactive collation. Finally, simple algorithms for distinguishing between types of changes and for linking collated source documents with the collation results are introduced.
KW - Softwarearchitektur
KW - Textvergleich
KW - service based software architecture
KW - service brokerage
KW - interactive collation of textual variants
KW - Gothenburg model of collation process
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-56601
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Assisting Analysis and Understanding of Quran Search Results with Interactive Scatter Plots and Tables
N2 - The Quran is the holy book of Islam, consisting of 6236 verses divided into 114 chapters called suras. Many verses are similar and even identical. Searching for similar texts (e.g. verses) can return thousands of verses which, when displayed completely or partly as a textual list, make analysis and understanding difficult and confusing. Moreover, it would be visually impossible to instantly figure out the overall distribution of the retrieved verses in the Quran. As a consequence, reading and analyzing the verses would be tedious and unintuitive. In this study, a combination of interactive scatter plots and tables has been developed to assist analysis and understanding of the search results. Retrieved verses are clustered by chapters, and a weight is assigned to each cluster according to the number of verses it contains, so that users can visually identify the most relevant areas and figure out the places of revelation of the verses.
Users see the complete result at a glance and can select a region of the plot to zoom in, or click on a marker to display a table containing the verses with their English translation side by side.
KW - Text Mining
KW - Visualisierung
KW - Koran
KW - Information Visualization
KW - Visual Text Mining
KW - Scatter Plot
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55840
ER -
TY - THES
A1 - Spoerhase, Joachim
T1 - Competitive and Voting Location
T1 - Kompetitive und präferenzbasierte Standortprobleme
N2 - We consider competitive location problems where two competing providers place their facilities sequentially and users can decide between the competitors. We assume that both competitors act non-cooperatively and aim at maximizing their own benefits. We investigate the complexity and approximability of such problems on graphs, in particular on simple graph classes such as trees and paths. We also develop fast algorithms for single competitive location problems where each provider places a single facility. Voting location, in contrast, aims at identifying locations that meet social criteria. The provider wants to satisfy the users (customers) of the facility to be opened. In general, there is no location that is favored by all users. Therefore, a satisfactory compromise has to be found. To this end, criteria arising from voting theory are considered. The solution of the location problem is understood as the winner of a virtual election among the users of the facilities, in which the potential locations play the role of the candidates and the users represent the voters. Competitive and voting location problems turn out to be closely related.
N2 - Wir betrachten kompetitive Standortprobleme, bei denen zwei konkurrierende Anbieter ihre Versorger sequenziell platzieren und die Kunden sich zwischen den Konkurrenten entscheiden können. Wir nehmen an, dass beide Konkurrenten nicht-kooperativ agieren und auf die Maximierung ihres eigenen Vorteils abzielen. Wir untersuchen die Komplexität und Approximierbarkeit solcher Probleme auf Graphen, insbesondere auf einfachen Graphklassen wie Bäumen und Pfaden. Ferner entwickeln wir schnelle Algorithmen für kompetitive Einzelstandortprobleme, bei denen jeder Anbieter genau einen Versorger errichtet. Im Gegensatz dazu geht es bei Voting-Standortproblemen um die Bestimmung eines Standorts, der die Benutzer oder Kunden soweit wie möglich zufrieden stellt. Solche Fragestellungen sind beispielsweise bei der Planung öffentlicher Einrichtungen relevant. In den meisten Fällen gibt es keinen Standort, der von allen Benutzern favorisiert wird. Daher muss ein Kompromiss gefunden werden. Hierzu werden Kriterien betrachtet, die auch in Wahlsystemen eingesetzt werden: Ein geeigneter Standort wird als Sieger einer gedachten Wahl verstanden, bei der die möglichen Standorte die zur Wahl stehenden Kandidaten und die Kunden die Wähler darstellen. Kompetitive Standortprobleme und Voting-Standortprobleme erweisen sich als eng miteinander verwandt.
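To illustrate the election metaphor used in this abstract, the following Python sketch (not from the thesis; positions and candidates are made-up) computes a Condorcet-style winner among candidate locations on a path, where each user "votes" for whichever of two locations is closer:

    def condorcet_location(candidates, users):
        """Return a candidate location that beats or ties every other one."""
        def prefer_count(x, y):
            # Number of users strictly closer to x than to y.
            return sum(1 for u in users if abs(u - x) < abs(u - y))
        for x in candidates:
            if all(prefer_count(x, y) >= prefer_count(y, x)
                   for y in candidates if y != x):
                return x
        return None  # no Condorcet winner exists

    users = [1.0, 2.0, 2.5, 8.0, 9.0]   # user positions on a path
    candidates = [2.0, 5.0, 8.5]        # potential facility locations
    print(condorcet_location(candidates, users))  # prints 2.0

On general graphs the distances would come from shortest paths instead of absolute differences, but the pairwise-comparison structure of the voting criterion stays the same.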
KW - Standortproblem
KW - NP-hartes Problem
KW - Approximationsalgorithmus
KW - Graph
KW - Effizienter Algorithmus
KW - competitive location
KW - voting location
KW - NP-hardness
KW - approximation algorithm
KW - efficient algorithm
KW - graph
KW - tree
KW - graph decomposition
Y1 - 2009
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-52978
ER -
TY - THES
A1 - Zeiger, Florian
T1 - Internet Protocol based networking of mobile robots
T1 - Internet Protokoll basierte Vernetzung von mobilen Robotern
N2 - This work is composed of three main parts: remote control of mobile systems via the Internet, ad-hoc networks of mobile robots, and remote control of mobile robots via 3G telecommunication technologies. The first part gives a detailed state of the art and a discussion of the problems to be solved in order to teleoperate mobile robots via the Internet. The focus of the application to be realized is set on a distributed tele-laboratory with remote experiments on mobile robots which can be accessed world-wide via the Internet. Therefore, analyses of the communication link are used in order to realize a robust system. The developed and implemented architecture of this distributed tele-laboratory allows for smooth access even with variable or low link quality. The second part covers the application of ad-hoc networks for mobile robots. The networking of mobile robots via mobile ad-hoc networks is a very promising approach for realizing integrated telematic systems without relying on preexisting communication infrastructure. Relevant civilian application scenarios are, for example, in the area of search and rescue operations, where first responders are supported by multi-robot systems. Here, mobile robots, humans, and also existing stationary sensors can be connected very quickly and efficiently. Therefore, this work investigates and analyses the performance of different ad-hoc routing protocols for IEEE 802.11 based wireless networks in relevant scenarios. The analysis of the different protocols allows for an optimization of the parameter settings in order to use these ad-hoc routing protocols for mobile robot teleoperation. Guidelines for the realization of such telematic systems are given, and traffic shaping mechanisms on the application layer are presented which allow for a more efficient use of the communication link. An additional application scenario, the integration of a small size helicopter into an IP based ad-hoc network, is presented. The teleoperation of mobile robots via 3G telecommunication technologies is addressed in the third part of this work. The high availability, high mobility, and high bandwidth provide a very interesting opportunity to realize scenarios for the teleoperation of mobile robots or industrial remote maintenance. This work analyses important parameters of the UMTS communication link and also investigates the characteristics of different data streams. These analyses are used to give guidelines which are necessary for the realization of industrial remote maintenance or mobile robot teleoperation scenarios. All the results and guidelines for the design of telematic systems in this work were derived from analyses and experiments with real hardware.
N2 - Diese Arbeit gliedert sich in drei Hauptteile: Fernsteuerung mobiler Systeme über das Internet, ad-hoc Netzwerke mobiler Roboter und Fernsteuerung mobiler Roboter über Mobilfunktechnologien der 3. Generation.
Im ersten Teil werden ein ausführlicher Stand der Technik und eine Diskussion der bei der Fernsteuerung mobiler Roboter über das Internet zu lösenden Probleme gegeben. Der Fokus der zu realisierenden Anwendung in diesem Teil der Arbeit liegt auf einem verteilten Tele-Labor mit Experimenten zu mobilen Robotern, welche über das Internet weltweit zugänglich sind. Hierzu werden Link-Analysen der zugrundeliegenden Kommunikationsinfrastruktur zu Hilfe genommen, um ein robustes System zu realisieren. Die entwickelte und implementierte Architektur des verteilten Tele-Labors erlaubt einen reibungslosen Zugang auch für Verbindungen mit variabler oder schlechter Linkqualität. Im zweiten Teil werden ad-hoc Netzwerke mobiler Roboter behandelt. Die Vernetzung mobiler Roboter über mobile ad-hoc Netzwerke ist eine vielversprechende Möglichkeit, um integrierte Telematiksysteme zu realisieren, ohne auf zuvor existierende Infrastruktur angewiesen zu sein. Relevante Einsatzszenarien im zivilen Bereich sind zum Beispiel Such- und Rettungsszenarien, in denen die Rettungskräfte vor Ort durch vernetzte Multi-Roboter-Systeme unterstützt werden. Hier werden dann mobile Roboter, Menschen und gegebenenfalls auch vorhandene stationäre Sensoren schnell und effizient vernetzt. In dieser Arbeit werden dazu verschiedene ad-hoc Routing-Protokolle für IEEE 802.11 basierte Drahtlosnetzwerke in relevanten Szenarien untersucht und deren Leistungsfähigkeit verglichen. Die Analyse der verschiedenen Protokolle erlaubt eine Optimierung der Parametereinstellung, um diese ad-hoc Routing-Protokolle zur Fernsteuerung mobiler Roboter nutzbar zu machen. Weiterhin werden Richtlinien zur Realisierung solcher Telematiksysteme erarbeitet und Mechanismen zur Verkehrsformung auf Applikationsebene präsentiert, die eine effizientere Nutzung der vorhandenen Kommunikationskanäle erlauben. Als weiteres Anwendungsbeispiel ist die Integration eines ferngesteuerten Kleinhubschraubers in ein IP basiertes ad-hoc Netz beschrieben. Der dritte Teil der Arbeit beschäftigt sich mit der Fernsteuerung mobiler Roboter über Mobilfunktechnologien der 3. Generation. Die hohe Verfügbarkeit der UMTS-Technologie mit der verbundenen Mobilität und der gleichzeitigen hohen Bandbreite bietet hier eine interessante Möglichkeit, um die Fernsteuerung mobiler Roboter oder auch interaktive Fernwartungsszenarien zu realisieren. In der vorliegenden Arbeit werden wichtige Parameter der UMTS-Verbindung analysiert und auch die Charakteristiken der Verbindung für verschiedene Verkehrsströme ermittelt. Diese dienen dann zur Erstellung von Richtlinien, die zur Umsetzung der interaktiven Fernwartungsszenarien oder auch der Fernsteuerung mobiler Roboter nötig sind. Die in dieser Arbeit erstellten Richtlinien zum Entwurf von Telematiksystemen wurden aus Analysen und Experimenten mit realer Hardware abgeleitet.
T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 4
KW - Robotik
KW - Mobiler Roboter
KW - Fernsteuerung
KW - vernetzte Roboter
KW - Telematik
KW - Fernsteuerung
KW - Robotik
KW - Internet Protokoll
KW - networked robotics
KW - telematics
KW - remote control
KW - robotics
KW - internet protocol
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-54776
SN - 978-3-923959-59-4
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - A Knowledge-based Hybrid Statistical Classifier for Reconstructing the Chronology of the Quran
N2 - Computationally categorizing the Quran's chapters has been mainly confined to determining the chapters' places of revelation.
However, this broad classification is not sufficient to effectively and thoroughly understand and interpret the Quran. Knowing the chronology of revelation would not only improve comprehension of the philosophy of Islam, but also make implementing and memorizing its laws and recommendations easier. This paper attempts to estimate the chapters' possible dates of revelation through their lexical frequency profiles. A hybrid statistical classifier consisting of stemming and clustering algorithms for comparing the lexical frequency profiles of chapters and deriving dates of revelation has been developed. The classifier is trained using some chapters with known dates of revelation. It then classifies chapters with uncertain dates of revelation by computing their proximity to the training ones. The results reported here indicate that the proposed methodology yields usable results in estimating the dates of revelation of the Quran's chapters based on their lexical contents.
KW - Text Mining
KW - Maschinelles Lernen
KW - text categorization
KW - Bayesian classifier
KW - distance-based classifier
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-54712
ER -
TY - THES
A1 - Sauer, Markus
T1 - Mixed-Reality for Enhanced Robot Teleoperation
T1 - Mixed-Reality zur verbesserten Fernbedienung von Robotern
N2 - In den letzten Jahren ist die Forschung in der Robotik soweit fortgeschritten, dass die Mensch-Maschine-Schnittstelle zunehmend die kritischste Komponente für eine hohe Gesamtperformanz von Systemen zur Navigation und Koordination von Robotern wird. In dieser Dissertation wird untersucht, wie Mixed-Reality-Technologien für Nutzerschnittstellen genutzt werden können, um diese Gesamtperformanz zu erhöhen. Hierzu werden Konzepte und Technologien entwickelt, die durch Evaluierung mit Nutzertests ein optimiertes und anwenderbezogenes Design von Mixed-Reality-Nutzerschnittstellen ermöglichen. Es werden somit sowohl die technischen Anforderungen als auch die menschlichen Faktoren für ein konsistentes Systemdesign berücksichtigt. Nach einer detaillierten Problemanalyse und der Erstellung eines Systemmodells, das den Menschen als Schlüsselkomponente mit einbezieht, wird zunächst die Anwendung der neuartigen 3D-Time-of-Flight-Kamera zur Navigation von Robotern, aber auch für den Einsatz in Mixed-Reality-Schnittstellen analysiert und optimiert. Weiterhin wird gezeigt, wie sich der Netzwerkverkehr des Videostroms als wichtigstes Informationselement der meisten Nutzerschnittstellen für die Navigationsaufgabe auf der Netzwerk-Applikationsebene in typischen Multi-Roboter-Netzwerken mit dynamischen Topologien und Lastsituationen optimieren lässt. Hierdurch ist es möglich, in sonst typischen Ausfallszenarien den Videostrom zu erhalten und die Bildrate zu stabilisieren. Diese fortgeschrittenen Technologien werden dann auch in dem entwickelten Konzept der generischen 3D-Mixed-Reality-Schnittstelle eingesetzt. Dieses Konzept ermöglicht eine integrierte 3D-Darstellung der verfügbaren Information, so dass räumliche Beziehungen von Informationen aufrechterhalten werden und somit die Anzahl der mentalen Transformationen beim menschlichen Bediener reduziert wird. Gleichzeitig werden durch diesen Ansatz auch immersive Stereo-Anzeigetechnologien unterstützt, welche zusätzlich das räumliche Verständnis der entfernten Situation fördern. Die in der Dissertation vorgestellten und evaluierten Ansätze nutzen auch die Tatsache, dass sich eine lokale Autonomie von Robotern heute sehr robust realisieren lässt.
Dies wird zum Beispiel zur Realisierung eines Assistenzsystems mit variabler Autonomie eingesetzt. Hierbei erhält der Fernbediener über eine Kraftrückkopplung, kombiniert mit einer integrierten Augmented-Reality-Schnittstelle, einen Eindruck von der Situation am entfernten Arbeitsbereich, aber auch von der aktuellen Navigationsintention des Roboters. Die durchgeführten Nutzertests belegen die signifikante Steigerung der Navigationsperformanz durch den entwickelten Ansatz. Die robuste lokale Autonomie ermöglicht auch den in der Dissertation eingeführten Ansatz der prädiktiven Mixed-Reality-Schnittstelle. Die durch diesen Ansatz entkoppelte Regelschleife über den Menschen ermöglicht es, die Sichtbarkeit von unvermeidbaren Systemverzögerungen signifikant zu reduzieren. Zusätzlich können durch diesen Ansatz beide für die Navigation hilfreichen Blickwinkel in einer 3D-Nutzerschnittstelle kombiniert werden – der exozentrische Blickwinkel und der egozentrische Blickwinkel als Augmented-Reality-Sicht.
N2 - With the progress in robotics research, the human-machine interface is more and more becoming the major limiting factor for the overall performance of systems for remote navigation and coordination of robots. In this monograph it is elaborated how mixed-reality technologies can be applied to user interfaces in order to increase the overall system performance. Concepts, technologies, and frameworks are developed and evaluated in user studies which enable novel user-centered approaches to the design of mixed-reality user interfaces for remote robot operation. Both the technological requirements and the human factors are considered to achieve a consistent system design. Novel technologies like 3D time-of-flight cameras are investigated for application in navigation tasks and in the developed concept of a generic mixed-reality user interface. In addition, it is shown how the network traffic of a video stream can be shaped on the application layer in order to reach a stable frame rate in dynamic networks. The elaborated generic mixed-reality framework enables an integrated 3D graphical user interface. The realized spatial integration and visualization of available information reduces the demand for mental transformations for the human operator and supports the use of immersive stereo devices. The developed concepts also make use of the fact that robust local autonomy components can be realized and thus incorporated as assistance systems for the human operator. A sliding autonomy concept is introduced which combines force feedback and visual augmented-reality feedback. The force feedback component allows rendering the robot's current navigation intention to the human operator, such that a real sliding autonomy with seamless transitions is achieved. The user studies demonstrate a significant increase in navigation performance through the application of this concept. The generic mixed-reality user interface together with robust local autonomy enables a further extension of the teleoperation system to a short-term predictive mixed-reality user interface. With the presented concept of operation, it is possible to significantly reduce the visibility of system delays for the human operator. In addition, both advantageous characteristics of a 3D graphical user interface for robot teleoperation – an exocentric view and an augmented-reality view – can be combined.
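The sliding autonomy idea described in this abstract can be made concrete with a minimal Python sketch (an illustrative toy model under assumed names and gains, not the thesis implementation): the executed command blends the operator's input with the robot's own navigation intention, and the disagreement between the two is returned as a feedback force so the operator can feel where the robot wants to go.

    def blend_command(operator_cmd, robot_intention, autonomy):
        """autonomy in [0, 1]: 0 = pure teleoperation, 1 = fully autonomous."""
        # Executed command: weighted blend of operator and robot inputs.
        blended = [(1.0 - autonomy) * o + autonomy * r
                   for o, r in zip(operator_cmd, robot_intention)]
        # Feedback force proportional to the disagreement renders the
        # robot's current navigation intention on the haptic device.
        feedback_force = [autonomy * (r - o)
                          for o, r in zip(operator_cmd, robot_intention)]
        return blended, feedback_force

    cmd, force = blend_command([0.5, 0.0], [0.3, 0.2], autonomy=0.4)
    print(cmd, force)  # [0.42, 0.08] [-0.08, 0.08]

Varying the autonomy weight continuously is what makes the transitions between teleoperation and autonomy seamless.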
T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 5
KW - Mobiler Roboter
KW - Autonomer Roboter
KW - Mensch-Maschine-Schnittstelle
KW - Mixed Reality
KW - Mensch-Roboter-Interaktion
KW - Teleoperation
KW - Benutzerschnittstelle
KW - Robotik
KW - Mensch-Maschine-System
KW - Human-Robot-Interaction
KW - Teleoperation
KW - User Interface
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55083
SN - 978-3-923959-67-9
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Markup overlap: Improving Fragmentation Method
N2 - Overlapping is a common word used to describe documents whose structural dimensions cannot be adequately represented using a tree structure; for instance, a quotation that starts in one verse and ends in another. The problem of overlapping hierarchies is a recurring one, which has been addressed by a variety of approaches. There are XML-based solutions as well as non-XML ones. The XML-based solutions are multiple documents, empty elements, fragmentation, out-of-line markup, JITT, and BUVH. The non-XML approaches comprise CONCUR/XCONCUR, MECS, LMNL, etc. This paper briefly presents the state of the art in overlapping hierarchies and introduces two variations on the TEI fragmentation markup that have several advantages.
KW - XML
KW - Überlappung
KW - Fragmentierung
KW - XML
KW - Overlapping
KW - Fragmentation
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-49084
ER -
TY - THES
A1 - Klein, Alexander
T1 - Performance Issues of MAC and Routing Protocols in Wireless Sensor Networks
T1 - Leistungsbeschränkende Faktoren von MAC und Routingprotokollen in drahtlosen Sensornetzen
N2 - The focus of this work lies on the communication issues of Medium Access Control (MAC) and routing protocols in the context of WSNs. The communication challenges in these networks mainly result from high node density, low bandwidth, tight energy constraints, and the hardware limitations of low-power transceivers in terms of memory, computational power, and sensing capabilities. For this reason, the structure of WSNs is always kept as simple as possible to minimize the impact of communication issues. Thus, the majority of WSNs apply a simple one-hop star topology, since multi-hop communication places high demands on the routing protocol and increases the bandwidth requirements of the network. Moreover, medium access becomes a challenging problem due to the fact that low-power transceivers are very limited in their sensing capabilities. The first contribution is represented by the Backoff Preamble-based MAC Protocol with Sequential Contention Resolution (BPS-MAC), which is designed to overcome the limitations of low-power transceivers. Two communication issues, namely the Clear Channel Assessment (CCA) delay and the turnaround time, are directly addressed by the protocol. The CCA delay represents the period of time which is required by the transceiver to detect a busy radio channel, while the turnaround time specifies the period of time which is required to switch between receive and transmit mode. Standard Carrier Sense Multiple Access (CSMA) protocols do not achieve high performance in terms of packet loss if the traffic is highly correlated, due to the fact that the transceiver is not able to sense the medium during the switching phase. Therefore, a node may start to transmit data while another node is already transmitting, since it sensed an idle medium right before it started to switch its transceiver from receive to transmit mode.
The BPS-MAC protocol uses a new sequential preamble-based medium access strategy which can be adapted to the hardware capabilities of the transceivers. The protocol achieves a very low packet loss rate, even in wireless networks with high node density and event-driven traffic, without the need for synchronization. This makes the protocol attractive for applications such as structural health monitoring, where event suppression is not an option. Moreover, acknowledgments or complex retransmission strategies become almost unnecessary, since the sequential preamble-based contention resolution mechanism minimizes the collision probability. However, packets can still be lost as a consequence of interference or other issues which affect signal propagation. The second contribution consists of a new routing protocol which is able to quickly detect topology changes without generating a large amount of overhead. The key characteristics of the Statistic-Based Routing (SBR) protocol are high end-to-end reliability (in fixed and mobile networks), load balancing capabilities, a smooth continuous routing metric, quick adaptation to changing network conditions, low processing and memory requirements, low overhead, support of unidirectional links, and simplicity. The protocol can establish routes in a hybrid or a proactive mode and uses an adaptive continuous routing metric which makes it very flexible in terms of scalability while maintaining stable routes. The hybrid mode is optimized for low-power WSNs, since routes are only established on demand. The difference between the hybrid mode and reactive routing strategies is that routing messages are periodically transmitted to maintain already established routes. However, the protocol stops the transmission of routing messages if no data packets are transmitted for a certain time period, in order to minimize the routing overhead and the energy consumption. The proactive mode is designed for high data rate networks which are less energy-constrained. In this mode, the protocol periodically transmits routing messages to establish routes in a proactive way, even in the absence of data traffic. Thus, nodes in the network can immediately transmit data, since the route to the destination is already established in advance. In addition, a new delay-based routing message forwarding strategy is introduced. The forwarding strategy is part of SBR but can also be applied to many other routing protocols in order to modify the established topology. The strategy can be used, e.g. in mobile networks, to decrease the packet loss by deferring routing messages with respect to the neighbor change rate. Thus, nodes with a stable neighborhood forward messages faster than nodes within a fast changing neighborhood. As a result, routes are established through nodes with correlated movement, which results in fewer topology changes due to higher link durations.
N2 - Im Rahmen dieser Arbeit werden leistungsbeschränkende Faktoren von Medium Access Control (MAC) und Routingprotokollen im Kontext von drahtlosen Sensornetzen untersucht. Zunächst werden typische Probleme des Funkkanals diskutiert. Anschließend führen eine Einteilung von MAC-Protokollen sowie eine Gegenüberstellung relevanter Protokolle in die Thematik ein. Daraufhin werden hardwarelimitierende Faktoren und deren Auswirkung auf die Effizienz von Kanalzugriffsprotokollen untersucht.
Des Weiteren wird das vom Autor entwickelte Backoff Preamble-based MAC Protokoll (BPS-MAC) vorgestellt, welches auf die limitierten Fähigkeiten sensortypischer Hardware eingeht und für dichte Sensornetze mit korreliertem Datenverkehr optimiert ist. Einen weiteren Schwerpunkt dieser Arbeit stellt das Thema Routing dar. Hier wird ebenfalls mit einer Einteilung der Protokolle in die Thematik eingeführt. Darüber hinaus werden die wichtigsten Aufgaben von Routingprotokollen vorgestellt. Ein Überblick über häufig verwendete Routingmetriken und Routingprotokolle schließt die Einführung in diesen Themenkomplex ab. Abschließend wird das im Rahmen der Dissertation entwickelte Statistic-Based-Routing-Protokoll (SBR) vorgestellt, welches ebenfalls für drahtlose Sensornetze optimiert ist. Der letzte Schwerpunkt beschreibt die Problematik der Leistungsbewertung von Routingprotokollen hinsichtlich klassischer Leistungsparameter wie Paketverlust und Verzögerung. Ebenfalls werden weitere Leistungsparameter, wie zum Beispiel die vom Nutzer wahrgenommene Netzqualität, genauer untersucht.
T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 03/10
KW - Routing
KW - Drahtloses Sensorsystem
KW - Leistungsbewertung
KW - Diskrete Simulation
KW - MAC
KW - Kanalzugriff
KW - Medium
KW - MAC
KW - routing
KW - sensor
KW - networks
KW - simulation
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-52870
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Understanding the Vex Rendering Engine
N2 - The Visual Editor for XML (Vex) [1] used by TextGrid [2] and other applications has a rendering and a layout engine. The layout engine is well documented, but the rendering engine is not. This lack of documentation has made refactoring and extending the editor hard and tedious. For instance, many CSS2.1 and upcoming CSS3 properties have not been implemented. Software developers in different projects using Vex, such as TextGrid, would like to update its CSS rendering engine in order to provide advanced user interfaces as well as to support different document types. In order to minimize the effort of extending Vex's functionality, I found it beneficial to write basic documentation of the Vex software architecture in general and of its CSS rendering engine in particular. The documentation is mainly based on the idea of architectural layered diagrams. In fact, layered diagrams can help developers understand a software system's source code faster and more easily in order to alter it and fix errors. This paper is written for the purpose of providing direct support for exploration in the comprehension process of the Vex source code. It discusses the Vex software architecture. The organization of the packages that make up the software, the architecture of its CSS rendering engine, and an algorithm explaining the working principle of the rendering engine are described.
KW - Cascading Style Sheets
KW - Softwarearchitektur
KW - CSS
KW - Processing model
KW - Software architecture
KW - Software design
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-51333
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Reference Architecture, Design of Cascading Style Sheets Processing Model
N2 - The technique of using Cascading Style Sheets (CSS) to format and present structured data is called a CSS processing model. For instance, a CSS processing model for XML documents describes the steps involved in formatting and presenting XML documents on screen or on paper.
Many software applications such as browsers and XML editors have their own CSS processing models, which are part of their rendering engines. For instance, each browser renders CSS layout differently based on its own CSS processing model; as a result, an inconsistency in the support of CSS features arises. Some browsers support more CSS features than others, and the rendering itself varies. Moreover, the W3C standards are not even adhered to by some browsers, such as Internet Explorer. Test suites and other hacks and filters cannot definitively solve these problems, because these solutions are temporary and fragile. To mitigate this inconsistency and the browser compatibility issues with respect to CSS, a reference CSS processing model is needed. By extension, it could even allow interoperability across CSS rendering engines. A reference architecture would provide a common software architecture and common interfaces, and facilitate refactoring, reuse, and automated unit testing. In [2] a reference architecture for browsers has been proposed. However, this reference architecture is a macro reference model which does not separately consider the individual components of rendering and layout engines. In this paper, an attempt to develop a reference architecture for CSS processing models is discussed. In addition, the rendering and layout engines of the Vex editor [3], as well as an extended version of the editor used in the TextGrid project [5], are presented in order to validate the proposed reference architecture.
KW - Cascading Style Sheets
KW - XML
KW - Softwarearchitektur
KW - CSS
KW - XML
KW - Processing Model
KW - Reference Architecture
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-51328
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Empirical Study on Screen Scraping Web Service Creation: Case of Rhein-Main-Verkehrsverbund (RMV)
N2 - The Internet is the biggest database that science and technology have ever produced. The World Wide Web is a large repository of information that cannot be used for automation by many applications due to its limited target audience. One of the solutions to the automation problem is to develop wrappers. Wrapping is a process whereby unstructured extracted information is transformed into a more structured format such as XML, which can then be provided as a web service to other applications. A web service is a web page whose content is well structured so that a computer program can consume it automatically. This paper describes the steps involved in constructing wrappers manually in order to automatically generate web services.
KW - HTML
KW - XML
KW - Wrapper
KW - Web service
KW - HTML
KW - XML
KW - Wrapper
KW - Web service
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-49396
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Java Web Frameworks Which One to Choose?
N2 - This article discusses web frameworks that are available to a software developer in the Java language. It introduces the MVC paradigm and some frameworks that implement it. The article presents an overview of the Struts, Spring MVC, and JSF frameworks, as well as guidelines for selecting one of them as a development environment.
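Since the article's comparison rests on the MVC paradigm, a framework-independent sketch may help. The following minimal Python example (illustrative only, not tied to Struts, Spring MVC, or JSF) shows the separation of concerns that all three frameworks implement: the model holds state and notifies observers, the view renders, and the controller translates user input into model updates.

    class Model:
        def __init__(self):
            self._items, self._observers = [], []

        def subscribe(self, observer):
            self._observers.append(observer)

        def add_item(self, item):
            self._items.append(item)
            for observer in self._observers:   # push change notification
                observer.refresh(self._items)

    class View:
        def refresh(self, items):
            print("current items:", ", ".join(items))

    class Controller:
        def __init__(self, model):
            self.model = model

        def handle_input(self, user_input):    # e.g. a form submission
            self.model.add_item(user_input.strip())

    model, view = Model(), View()
    model.subscribe(view)
    Controller(model).handle_input("  hello MVC ")

The frameworks differ mainly in how the controller layer and the view templates are wired together, which is what the article's selection guidelines address.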
KW - Java Frameworks
KW - MVC
KW - Struts
KW - Spring
KW - JSF
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-49407
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Doing Webservices Composition by Content-based Mashup: Example of a Web-based Simulator for Itinerary Planning
N2 - Web services composition is traditionally carried out using composition technologies such as the Business Process Execution Language (BPEL) [1] and the Web Service Choreography Interface (WSCI) [2]. The composition technology involves the process of web service discovery, invocation, and composition. However, these technologies are not easy and flexible enough because they are mainly developer-centric. Moreover, the majority of websites have not yet embarked on the world of web services, although they have very important and useful information to offer. Is it because they have not understood the usefulness of web services, or is it because of the costs? Whatever the answers to these questions might be, time and money are definitely required in order to create and offer web services. To avoid these expenditures, wrappers [7] that automatically generate web services from websites would be a cheaper and easier solution. Mashups offer a different way of doing web services composition. In a web environment, a mashup is a web application that brings together data from several sources using web services, APIs, wrappers, and so on, in order to create an entirely new application that was not provided before. This paper first presents an overview of mashups and the process of web service invocation and composition based on mashups, and then describes an example of a web-based simulator for a navigation system in Germany.
KW - Mashup
KW - Wrapper
KW - Mashup
KW - Webservice Composition
KW - Wrappers
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-50036
ER -
TY - THES
A1 - Aschenbrenner, Doris
T1 - Human Robot Interaction Concepts for Human Supervisory Control and Telemaintenance Applications in an Industry 4.0 Environment
T1 - Mensch-Roboter-Interaktionskonzepte für Fernsteuerungs- und Fernwartungsanwendungen in einer Industrie 4.0 Umgebung
N2 - While teleoperation of highly sophisticated technical systems has long been a wide field of research, especially for space and robotics applications, the automation industry has not yet benefited from its results. Besides the established fields of application, production lines with industrial robots and the surrounding plant components also need to be remotely accessible. This is especially critical for maintenance or if an unexpected problem cannot be solved by the local specialists. Special machine manufacturers, especially robotics companies, sell their technology worldwide. Some factories, for example in emerging economies, lack qualified personnel for repair and maintenance tasks. When a severe failure occurs, an expert of the manufacturer needs to fly there, which leads to long downtimes of the machine or even the whole production line. With the development of data networks, a large part of this travel can be avoided if appropriate teleoperation equipment is provided. This thesis describes the development of a telemaintenance system which was established in an active production line for research purposes.
The customer production site of Braun in Marktheidenfeld, a factory which belongs to Procter & Gamble, consists of a six-axis cartesian industrial robot by KUKA Industries, a two-component injection molding system, and an assembly unit. The plant produces plastic parts for electric toothbrushes. In the research projects "MainTelRob" and "Bayern.digital", during which this plant was utilised, the Zentrum für Telematik e.V. (ZfT) and its project partners develop novel technical approaches and procedures for modern telemaintenance. The term "telemaintenance" refers to the integration of computer science and communication technologies into the maintenance strategy. It is particularly interesting for high-grade, capital-intensive goods like industrial robots. Typical telemaintenance tasks are, for example, the analysis of a robot failure or difficult repair operations. The service department of KUKA Industries is responsible for the worldwide distributed customers who own more than one robot. Currently such tasks are handled via phone support and service staff who travel abroad. KUKA wants to expand its service activities to telemaintenance and struggles with the high demands of teleoperation, especially regarding the security infrastructure. In addition, the facility in Marktheidenfeld has to keep up with the high international standards of Procter & Gamble and wants to minimize machine downtimes. Like 71.6 % of all German companies, P&G sees a huge potential for early information on their production system, but complains about the insufficient quality and timeliness of the data. The main research focus of this work lies on the human-machine interface for all human tasks in a telemaintenance setup. This thesis presents original work on the use of a mobile device in the context of maintenance, describes new tools for asynchronous remote analysis, and puts all parts together in an integrated telemaintenance infrastructure. With the help of Augmented Reality, user performance and satisfaction could be increased. Special regard is paid to the situation awareness of the remote expert, realized by different camera viewpoints. In detail, the work consists of: - Support of maintenance tasks with a mobile device - Development and evaluation of a context-aware inspection tool - Comparison of a new touch-based mobile robot programming device to the former teach pendant - Study on Augmented Reality support for repair tasks with a mobile device - Condition monitoring for a specific plant with an industrial robot - Human-computer interaction for remote analysis of a single plant cycle - A big data analysis tool for a multitude of cycles and similar plants - 3D process visualization for a specific plant cycle with additional virtual information - Network architecture in hardware, software and network infrastructure - Mobile device computer-supported collaborative work for telemaintenance - Motor exchange telemaintenance example in a running production environment - Augmented Reality supported remote plant visualization for better situation awareness
N2 - Die Fernsteuerung technisch hochentwickelter Systeme ist seit vielen Jahren ein breites Forschungsfeld, vor allem im Bereich von Weltraum- und Robotikanwendungen. Allerdings hat die Automatisierungsindustrie bislang zu wenig von den Ergebnissen dieses Forschungsgebiets profitiert.
Auch Fertigungslinien mit Industrierobotern und weiteren Anlagenkomponenten müssen über die Ferne zugänglich sein, besonders bei Wartungsfällen oder wenn unvorhergesehene Probleme nicht von den lokalen Spezialisten gelöst werden können. Hersteller von Sondermaschinen wie Robotikfirmen verkaufen ihre Technologie weltweit. Kunden dieser Firmen besitzen beispielsweise Fabriken in Schwellenländern, wo es an qualifiziertem Personal für Reparatur und Wartung mangelt. Wenn ein ernster Fehler auftaucht, muss daher ein Experte des Sondermaschinenherstellers zum Kunden fliegen. Das führt zu langen Stillstandzeiten der Maschine. Durch die Weiterentwicklung der Datennetze könnte ein großer Teil dieser Reisen unterbleiben, wenn eine passende Fernwartungsinfrastruktur vorliegen würde. Diese Arbeit beschreibt die Entwicklung eines Fernwartungssystems, welches in einer aktiven Produktionsumgebung für Forschungszwecke eingerichtet wurde. Die Fertigungsanlage des Kunden wurde von Procter & Gamble in Marktheidenfeld zur Verfügung gestellt und besteht aus einem sechsachsigen, kartesischen Industrieroboter von KUKA Industries, einer Zweikomponentenspritzgussanlage und einer Montageeinheit. Die Anlage produziert Plastikteile für elektrische Zahnbürsten. Diese Anlage wurde im Rahmen der Forschungsprojekte "MainTelRob" und "Bayern.digital" verwendet, in denen das Zentrum für Telematik e.V. (ZfT) und seine Projektpartner neue Ansätze und Prozeduren für moderne Fernwartungs-Technologien entwickeln. Fernwartung bedeutet für uns die umfassende Integration von Informatik und Kommunikationstechnologien in der Wartungsstrategie. Das ist vor allem für hochentwickelte, kapitalintensive Güter wie Industrieroboter interessant. Typische Fernwartungsaufgaben sind beispielsweise die Analyse von Roboterfehlermeldungen oder schwierige Reparaturmaßnahmen. Die Service-Abteilung von KUKA Industries ist für die weltweit verteilten Kunden zuständig, die teilweise auch mehr als einen Roboter besitzen. Aktuell werden derartige Aufgaben per Telefonauskunft oder durch mobile Servicekräfte, die zum Kunden reisen, erledigt. Will man diese komplizierten Aufgaben durch Fernwartung ersetzen, um die Serviceaktivitäten auszuweiten, muss man mit den hohen Anforderungen von Fernsteuerung zurechtkommen, besonders in Bezug auf die Security-Infrastruktur. Eine derartige umfassende Herangehensweise an Fernwartung bietet aber auch einen lokalen Mehrwert beim Kunden: Die Fabrik in Marktheidenfeld muss den hohen internationalen Standards von Procter & Gamble folgen und will daher die Stillstandzeiten weiter verringern. Wie 71,6 Prozent aller deutschen Unternehmen sieht auch P&G Marktheidenfeld ein großes Potential für frühe Informationen aus ihrem Produktionssystem, hat aber aktuell noch Probleme mit der Aktualität und Qualität dieser Daten. Der Hauptfokus der hier vorgestellten Forschung liegt auf der Mensch-Maschine-Schnittstelle für alle Aufgaben eines umfassenden Fernwartungskontextes. Diese Arbeit stellt eigene Arbeiten bei der Verwendung mobiler Endgeräte im Kontext der Wartung und neue Softwarewerkzeuge für die asynchrone Fernanalyse vor und integriert diese Aspekte in eine Fernwartungsinfrastruktur. In diesem Kontext kann gezeigt werden, dass der Einsatz von Augmented Reality die Nutzerleistung und gleichzeitig die Zufriedenheit steigern kann. Dabei wird auf das sogenannte "situative Bewusstsein" des entfernten Experten besonders Wert gelegt.
Im Detail besteht die Arbeit aus:
- Unterstützung von Wartungsaufgaben mit mobilen Endgeräten
- Entwicklung und Evaluation kontextsensitiver Inspektionssoftware
- Vergleich von touch-basierter Roboterprogrammierung mit der Vorgängerversion des Programmierhandgeräts
- Studien über die Unterstützung von Reparaturaufgaben durch Augmented Reality
- Zustandsüberwachung für eine spezielle Anlage mit Industrieroboter
- Mensch-Maschine-Interaktion für die Teleanalyse eines Produktionszyklus
- Grafische Big-Data-Analyse einer Vielzahl von Produktionszyklen
- 3D-Prozessvisualisierung und Anreicherung mit virtuellen Informationen
- Hardware, Software und Netzwerkarchitektur für die Fernwartung
- Computerunterstützte Zusammenarbeit mit Verwendung mobiler Endgeräte für die Fernwartung
- Fernwartungsbeispiel: Durchführung eines Motortauschs in der laufenden Produktion
- Augmented-Reality-unterstützte Visualisierung des Anlagenkontextes für die Steigerung des situativen Bewusstseins
T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 13 KW - Fernwartung KW - Robotik KW - Mensch-Maschine-Schnittstelle KW - Erweiterte Realität KW - Situation Awareness KW - Industrie 4.0 KW - Industrial internet Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-150520 SN - 978-3-945459-18-8 ER - TY - THES A1 - Borrmann, Dorit T1 - Multi-modal 3D mapping - Combining 3D point clouds with thermal and color information T1 - Multi-modale 3D-Kartierung - Kombination von 3D-Punktwolken mit Thermo- und Farbinformation N2 - Imagine a technology that automatically creates a full 3D thermal model of an environment and detects temperature peaks in it. For better orientation in the model it is enhanced with color information. The current state of the art for analyzing temperature-related issues is thermal imaging. It is relevant for energy efficiency but also for securing important infrastructure such as power supplies and temperature regulation systems. Monitoring and analysis of the data for a large building is tedious, as stable conditions need to be guaranteed for several hours and detailed notes about the pose and the environmental conditions for each image must be taken. For some applications repeated measurements are necessary to monitor changes over time. The analysis of the scene is only possible through expertise and experience. This thesis proposes a robotic system that creates a full 3D model of the environment with color and thermal information by combining thermal imaging with the technology of terrestrial laser scanning. The addition of a color camera facilitates the interpretation of the data and allows for other application areas. The data from all sensors collected at different positions is joined in one common reference frame using calibration and scan matching. The first part of the thesis deals with 3D point cloud processing with the emphasis on accessing point cloud data efficiently, detecting planar structures in the data and registering multiple point clouds into one common coordinate system. The second part covers the autonomous exploration and data acquisition with a mobile robot with the objective of minimizing the unseen area in 3D space. Furthermore, the combination of different modalities, color images, thermal images and point cloud data through calibration is elaborated. The last part presents applications for the collected data.
Among these are methods to detect the structure of building interiors for reconstruction purposes and the subsequent detection and classification of windows. A system to project the gathered thermal information back into the scene is presented, as well as methods to improve the color information and to join separately acquired point clouds and photo series. A full multi-modal 3D model contains all the relevant geometric information about the recorded scene and enables an expert to fully analyze it off-site. The technology paves the way for automatically detecting points of interest, thereby helping the expert to analyze the heat flow as well as to localize and identify heat leaks. The concept is modular and neither limited to achieving energy efficiency nor restricted to use in combination with a mobile platform. It also finds its application in fields such as archaeology and geology and can be extended by further sensors. N2 - Man stelle sich eine Technologie vor, die automatisch ein vollständiges 3D-Thermographiemodell einer Umgebung generiert und Temperaturspitzen darin erkennt. Zur besseren Orientierung innerhalb des Modells ist dieses mit Farbinformationen erweitert. In der Analyse temperaturrelevanter Fragestellungen sind Thermalbilder der Stand der Technik. Darunter fallen Energieeffizienz und die Sicherung wichtiger Infrastruktur, wie Energieversorgung und Systeme zur Temperaturregulierung. Die Überwachung und anschließende Analyse der Daten eines großen Gebäudes ist aufwändig, da über mehrere Stunden stabile Bedingungen garantiert und detaillierte Aufzeichnungen über die Aufnahmeposen und die Umgebungsverhältnisse für jedes Wärmebild erstellt werden müssen. Einige Anwendungen erfordern wiederholte Messungen, um Veränderungen über die Zeit zu beobachten. Eine Analyse der Szene ist nur mit Erfahrung und Expertise möglich. Diese Arbeit stellt ein Robotersystem vor, das durch Kombination von Thermographie mit terrestrischem Laserscanning ein vollständiges 3D Modell der Umgebung mit Farb- und Temperaturinformationen erstellt. Die ergänzende Farbkamera vereinfacht die Interpretation der Daten und eröffnet weitere Anwendungsfelder. Die an unterschiedlichen Positionen aufgenommenen Daten aller Sensoren werden durch Kalibrierung und Scanmatching in einem gemeinsamen Bezugssystem zusammengefügt. Der erste Teil der Arbeit behandelt 3D-Punktwolkenverarbeitung mit Schwerpunkt auf effizientem Punktzugriff, Erkennung planarer Strukturen und Registrierung mehrerer Punktwolken in einem gemeinsamen Koordinatensystem. Der zweite Teil beschreibt die autonome Erkundung und Datenakquise mit einem mobilen Roboter, mit dem Ziel, die bisher nicht erfassten Bereiche im 3D-Raum zu minimieren. Des Weiteren wird die Kombination verschiedener Modalitäten, Farbbilder, Thermalbilder und Punktwolken durch Kalibrierung ausgearbeitet. Den abschließenden Teil stellen Anwendungsszenarien für die gesammelten Daten dar, darunter Methoden zur Erkennung der Innenraumstruktur für die Rekonstruktion von Gebäuden und der anschließenden Erkennung und Klassifizierung von Fenstern. Ein System zur Rückprojektion der gesammelten Thermalinformation in die Umgebung wird ebenso vorgestellt wie Methoden zur Verbesserung der Farbinformationen und zum Zusammenfügen separat aufgenommener Punktwolken und Fotoreihen. Ein vollständiges multi-modales 3D Modell enthält alle relevanten geometrischen Informationen der aufgenommenen Szene und ermöglicht einem Experten, diese standortunabhängig zu analysieren.
Diese Technologie ebnet den Weg für die automatische Erkennung relevanter Bereiche und für die Analyse des Wärmeflusses und vereinfacht somit die Lokalisierung und Identifikation von Wärmelecks für den Experten. Das vorgestellte modulare Konzept ist weder auf den Anwendungsfall Energieeffizienz beschränkt noch auf die Verwendung einer mobilen Plattform angewiesen. Es ist beispielsweise auch in Feldern wie der Archäologie und Geologie einsetzbar und kann durch zusätzliche Sensoren erweitert werden. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 14 KW - Punktwolke KW - Lidar KW - Thermografie KW - Robotik KW - 3D point cloud KW - Laser scanning KW - Robotics KW - 3D thermal mapping KW - Registration Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-157085 SN - 978-3-945459-20-1 SN - 1868-7474 SN - 1868-7466 ER - TY - THES A1 - Houshiar, Hamidreza T1 - Documentation and mapping with 3D point cloud processing T1 - Dokumentation und Kartierung mittels 3D-Punktwolkenverarbeitung N2 - 3D point clouds are a de facto standard for 3D documentation and modelling. The advances in laser scanning technology broaden the usability of and access to 3D measurement systems. 3D point clouds are used in many disciplines such as robotics, 3D modelling, archeology and surveying. Scanners are able to acquire up to a million points per second, representing the environment with a dense point cloud that captures it with a very high degree of detail. The combination of laser scanning technology with photography adds color information to the point clouds. Thus, the environment is represented more realistically. Full 3D models of environments, without any occlusion, require multiple scans. Merging point clouds is a challenging process. This thesis presents methods for point cloud registration based on the panorama images generated from the scans. Representing point clouds as images makes 2D image processing methods applicable to 3D point clouds. Several projection methods for the generation of panorama maps of point clouds are presented in this thesis. Additionally, methods for point cloud reduction and compression based on the panorama maps are proposed. Due to the large amounts of data generated by 3D measurement systems, these methods are necessary to improve point cloud processing, transmission and archiving. This thesis introduces point cloud processing methods as a novel framework for the digitisation of archeological excavations. The framework replaces the conventional documentation methods for excavation sites. It employs point clouds for the generation of the digital documentation of an excavation with the help of an archeologist on-site. The 3D point cloud is used not only for data representation but also for analysis and knowledge generation. Finally, this thesis presents an autonomous indoor mobile mapping system. The mapping system focuses on the sensor placement planning method. Capturing a complete environment requires several scans. The sensor placement planning method computes the minimum number of scans required to digitise large environments. Combining this method with a navigation system on a mobile robot platform enables fully autonomous data acquisition. This thesis introduces a novel hole detection method for point clouds to detect obscured parts of a captured environment. The sensor placement planning method selects the next scan position with the most coverage of the obscured environment.
This reduces the required number of scans. The navigation system on the robot platform consists of path planning, path following and obstacle avoidance. This guarantees the safe navigation of the mobile robot platform between the scan positions. The sensor placement planning method is designed as a stand-alone process that can be used with a mobile robot platform for autonomous mapping of an environment or as an assistance tool for the surveyor in scanning projects. N2 - 3D-Punktwolken sind der de facto Standard bei der Dokumentation und Modellierung in 3D. Die Fortschritte in der Laserscanningtechnologie erweitern die Verwendbarkeit und die Verfügbarkeit von 3D-Messsystemen. 3D-Punktwolken werden in vielen Disziplinen verwendet, wie z.B. in der Robotik, 3D-Modellierung, Archäologie und Vermessung. Scanner sind in der Lage, bis zu einer Million Punkte pro Sekunde zu erfassen, um die Umgebung mit einer dichten Punktwolke abzubilden und mit einem hohen Detaillierungsgrad darzustellen. Die Kombination der Laserscanningtechnologie mit Methoden der Photogrammetrie fügt den Punktwolken Farbinformationen hinzu. Somit wird die Umgebung realistischer dargestellt. Vollständige 3D-Modelle der Umgebung ohne Verschattungen benötigen mehrere Scans. Punktwolken zusammenzufügen ist eine anspruchsvolle Aufgabe. Diese Arbeit stellt Methoden zur Punktwolkenregistrierung basierend auf aus den Scans erzeugten Panoramabildern vor. Die Darstellung einer Punktwolke als Bild bringt Methoden der 2D-Bildverarbeitung an 3D-Punktwolken heran. Der Autor stellt mehrere Projektionsmethoden zur Erstellung von Panoramabildern aus 3D-Punktwolken vor. Außerdem werden Methoden zur Punktwolkenreduzierung und -kompression basierend auf diesen Panoramabildern vorgeschlagen. Aufgrund der großen Datenmenge, die von 3D-Messsystemen erzeugt wird, sind diese Methoden notwendig, um die Punktwolkenverarbeitung, -übertragung und -archivierung zu verbessern. Diese Arbeit präsentiert Methoden der Punktwolkenverarbeitung als neuartige Ablaufstruktur für die Digitalisierung von archäologischen Ausgrabungen. Durch diesen Ablauf werden konventionelle Methoden auf Ausgrabungsstätten ersetzt. Er verwendet Punktwolken für die Erzeugung der digitalen Dokumentation einer Ausgrabung mithilfe eines Archäologen vor Ort. Die 3D-Punktwolke kommt nicht nur für die Anzeige der Daten, sondern auch für die Analyse und Wissensgenerierung zum Einsatz. Schließlich stellt diese Arbeit ein autonomes Indoor-Mobile-Mapping-System mit Fokus auf der Positionsplanung des Messgeräts vor. Die Positionsplanung bestimmt die minimal benötigte Anzahl an Scans, um großflächige Umgebungen zu digitalisieren. Kombiniert mit einem Navigationssystem auf einer mobilen Roboterplattform ermöglicht diese Methode die vollautonome Datenerfassung. Diese Arbeit stellt eine neuartige Erkennungsmethode für Lücken in Punktwolken vor, um verdeckte Bereiche der erfassten Umgebung zu bestimmen. Die Positionsplanung bestimmt als nächste Scanposition diejenige mit der größten Abdeckung der verdeckten Umgebung. Das Navigationssystem des Roboters besteht aus der Pfadplanung, der Pfadverfolgung und einer Hindernisvermeidung, um eine sichere Fortbewegung der mobilen Roboterplattform zwischen den Scanpositionen zu garantieren. Die Positionsplanungsmethode wurde als eigenständiges Verfahren entworfen, das auf einer mobilen Roboterplattform zur autonomen Kartierung einer Umgebung zum Einsatz kommen oder einem Vermesser bei einem Scanprojekt als Unterstützung dienen kann.
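A minimal sketch of the panorama idea used in this record, assuming an equirectangular projection to a range image, a scanner at the origin, and an illustrative image resolution; this is a reconstruction in Python/NumPy under those assumptions, not code from the thesis:

import numpy as np

def panorama_range_image(points, width=720, height=360):
    # Convert Cartesian points (N x 3, scanner at the origin) to
    # spherical coordinates: range, azimuth and elevation per point.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arctan2(y, x)                                  # azimuth in [-pi, pi]
    phi = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))  # elevation in [-pi/2, pi/2]
    # Map the angles linearly to pixel coordinates of the panorama.
    col = ((theta + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((np.pi / 2 - phi) / np.pi * (height - 1)).astype(int)
    image = np.zeros((height, width), dtype=np.float32)
    image[row, col] = r   # later points overwrite earlier ones per pixel
    return image

# Illustrative usage with a random cloud of 10000 points:
cloud = np.random.randn(10000, 3)
pano = panorama_range_image(cloud)

On such an image, 2D techniques (feature extraction, compression) become applicable to the 3D data, which is the core of the registration and reduction methods summarized above.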
T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 12 KW - 3D Punktwolke KW - Robotik KW - Registrierung KW - 3D Pointcloud KW - Feature Based Registration KW - Compression KW - Computer Vision KW - Robotics KW - Panorama Images Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-144493 SN - 978-3-945459-14-0 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Using Machine Learning Algorithms for Categorizing Quranic Chapters by Major Phases of Prophet Mohammad’s Messengership N2 - This paper discusses the categorization of Quranic chapters by major phases of Prophet Mohammad’s messengership using machine learning algorithms. First, the chapters were categorized by places of revelation using Support Vector Machine and naïve Bayesian classifiers separately, and their results were compared to each other, as well as to the existing traditional Islamic and western orientalist classifications. The chapters were categorized into Meccan (revealed in Mecca) and Medinan (revealed in Medina). After that, chapters of each category were clustered using a fuzzy single-linkage clustering approach, in order to correspond to the major phases of Prophet Mohammad’s life. The major phases of the Prophet’s life were manually derived from the Quranic text, as well as from the secondary Islamic literature, e.g. hadiths and exegesis. Previous studies on computing the places of revelation of Quranic chapters relied heavily on features extracted from existing background knowledge of the chapters. For instance, it is known that Meccan chapters contain mostly verses about faith and related problems, while Medinan ones encompass verses dealing with social issues, battles, etc. These features are by themselves insufficient as a basis for assigning the chapters to their respective places of revelation. In fact, there are exceptions, since some chapters do contain both Meccan and Medinan features. In this study, features of each category were automatically created from very few chapters, whose places of revelation have been determined through identification of historical facts and events such as battles, migration to Medina, etc. Chapters having unanimously agreed places of revelation were used as the initial training set, while the remaining chapters formed the testing set. The classification process was made recursive by regularly augmenting the training set with correctly classified chapters, in order to classify the whole testing set. Each chapter was preprocessed by removing unimportant words, stemming, and representation with the vector space model. The result of this study shows that the two classifiers produced usable results, with the support vector machine classifier performing better. This study indicates that the proposed methodology yields encouraging results for arranging Quranic chapters by phases of Prophet Mohammad’s messengership. KW - Koran KW - Maschinelles Lernen KW - Text categorization KW - Clustering KW - Support Vector Machine KW - Naïve Bayesian KW - Place of revelation KW - Stages of Prophet Mohammad’s messengership KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66862 ER - TY - JOUR A1 - Vainshtein, Yevhen A1 - Sanchez, Mayka A1 - Brazma, Alvis A1 - Hentze, Matthias W. A1 - Dandekar, Thomas A1 - Muckenthaler, Martina U.
T1 - The IronChip evaluation package: a package of Perl modules for robust analysis of custom microarrays N2 - Background: Gene expression studies greatly contribute to our understanding of complex relationships in gene regulatory networks. However, the complexity of array design, production and manipulation are limiting factors affecting data quality. The use of customized DNA microarrays improves overall data quality in many situations, however, only if analysis tools are available for these specifically designed microarrays. Results: The IronChip Evaluation Package (ICEP) is a collection of Perl utilities and an easy-to-use data evaluation pipeline for the analysis of microarray data with a focus on data quality of custom-designed microarrays. The package has been developed for the statistical and bioinformatic analysis of the custom cDNA microarray IronChip but can be easily adapted for other cDNA or oligonucleotide-based microarray platforms. ICEP uses decision tree-based algorithms to assign quality flags and performs robust analysis based on chip design properties regarding multiple repetitions, ratio cut-off, background and negative controls. Conclusions: ICEP is a stand-alone Windows application to obtain optimal data quality from custom-designed microarrays and is freely available here (see “Additional Files” section) and at: http://www.alice-dsl.net/evgeniy.vainshtein/ICEP/ KW - Microarray KW - ICEP KW - IronChip Evaluation Package Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-67869 ER - TY - THES A1 - Schlosser, Daniel T1 - Quality of Experience Management in Virtual Future Networks T1 - Netzwerkmanagement unter Berücksichtigung der vom Benutzer erfahrenen Dienstgüte in virtuellen zukünftigen Netzen N2 - Aktuell beobachten wir eine drastische Vervielfältigung der Dienste und Anwendungen, die das Internet für den Datentransport nutzen. Dabei unterscheiden sich die Anforderungen dieser Dienste an das Netzwerk deutlich. Das Netzwerkmanagement wird durch diese Diversität der nutzenden Dienste aber deutlich erschwert, da es einem Datentransportdienstleister kaum möglich ist, die unterschiedlichen Verbindungen zu unterscheiden, ohne den Inhalt der transportierten Daten zu analysieren. Netzwerkvirtualisierung ist eine vielversprechende Lösung für dieses Problem, da sie es ermöglicht, für verschiedene Dienste unterschiedliche virtuelle Netze auf dem gleichen physikalischen Substrat zu betreiben. Diese Diensttrennung ermöglicht es, jedes einzelne Netz anwendungsspezifisch zu steuern. Ziel einer solchen Netzsteuerung ist es, sowohl die vom Nutzer erfahrene Dienstgüte als auch die Kosteneffizienz des Datentransports zu optimieren. Darüber hinaus wird es mit Netzwerkvirtualisierung möglich, das physikalische Netz so weit zu abstrahieren, dass die aktuell fest verzahnten Rollen von Netzwerkbesitzer und Netzwerkbetreiber entkoppelt werden können. Darüber hinaus stellt Netzwerkvirtualisierung sicher, dass unterschiedliche Datennetze, die gleichzeitig auf dem gleichen physikalischen Netz betrieben werden, sich gegenseitig weder beeinflussen noch stören können. Diese Arbeit beschäftigt sich mit ausgewählten Aspekten dieses Themenkomplexes und fokussiert sich darauf, ein virtuelles Netzwerk mit bestmöglicher Dienstqualität für den Nutzer zu betreiben und zu steuern. Dafür wird ein Top-down-Ansatz gewählt, der von den Anwendungsfällen, einer möglichen Netzwerkvirtualisierungs-Architektur und aktuellen Möglichkeiten der Hardwarevirtualisierung ausgeht.
Im Weiteren fokussiert sich die Arbeit dann in Richtung Bestimmung und Optimierung der vom Nutzer erfahrenen Dienstqualität (QoE) auf der Applikationsschicht und diskutiert Möglichkeiten zur Messung und Überwachung von wesentlichen Netzparametern in virtualisierten Netzen. N2 - Currently, we observe a strong growth of services and applications that use the Internet for data transport. However, the network requirements of these applications differ significantly. This makes network management difficult, since it is complicated to separate network flows into application classes without inspecting application layer data. Network virtualization is a promising solution to this problem. It enables running different virtual networks on the same physical substrate. Separating networks based on the service they support allows controlling each network according to the specific needs of the application. The aim of such network control is to optimize the user-perceived quality as well as the cost efficiency of the data transport. Furthermore, network virtualization abstracts the network functionality from the underlying implementation and facilitates the split of the currently tightly integrated roles of Internet Service Provider and network owner. Additionally, network virtualization guarantees that different virtual networks running on the same physical substrate do not interfere with each other. This thesis discusses different aspects of the network virtualization topic. It is focused on how to manage and control a virtual network to guarantee the best Quality of Experience for the user. Therefore, a top-down approach is chosen. Starting with use cases of virtual networks, a possible architecture is derived and current implementation options based on hardware virtualization are explored. In the following, this thesis focuses on assessing the Quality of Experience perceived by the user and how it can be optimized on the application layer. Furthermore, options for measuring and monitoring significant network parameters of virtual networks are considered. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/12 KW - Netzwerkmanagement KW - Dienstgüte KW - Netzwerkvirtualisierung KW - QoS KW - QoE KW - Network Virtualization KW - Quality of Experience KW - Network Management KW - Quality of Service Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-69986 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Towards a Knowledge-Based Learning System for The Quranic Text N2 - In this research, an attempt to create a knowledge-based learning system for the Quranic text has been made. The knowledge base is made up of the Quranic text along with detailed information about each chapter and verse, and some rules. The system offers the possibility to study the Quran through web-based interfaces, implementing novel visualization techniques for browsing, querying, consulting, and testing the acquired knowledge. Additionally, the system possesses knowledge acquisition facilities for maintaining the knowledge base.
KW - Wissensbanksystem KW - Wissensmanagement KW - Text Mining KW - Visualisierung KW - Koran KW - Knowledge-based System KW - Knowledge Management System KW - Text Mining KW - Visualization KW - Quran Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-70003 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Computing Generic Causes of Revelation of the Quranic Verses Using Machine Learning Techniques N2 - Because many verses of the holy Quran are similar, there is a high probability that similar verses addressing the same issues share the same generic causes of revelation. In this study, machine learning techniques have been employed in order to automatically derive causes of revelation of Quranic verses. The derivation of the causes of revelation is viewed as a classification problem. Initially, the categories are based on the verses with known causes of revelation, and the testing set consists of the remaining verses. Based on a computed threshold value, a naïve Bayesian classifier is used to categorize some verses. After that, using a decision tree classifier the remaining uncategorized verses are separated into verses that contain indicators (resultative connectors, causative expressions, etc.) and those that do not. As for those verses having indicators, each one is segmented into its constituent clauses by identification of the linking indicators. Then a dominant clause is extracted and considered either as the cause of revelation, or post-processed by adding or subtracting some terms to form a causal clause that constitutes the cause of revelation. Concerning the remaining unclassified verses without indicators, a naïve Bayesian classifier is again used to assign each one of them to one of the existing classes based on feature and topic similarity. As for verses that could not be classified so far, manual classification was performed by considering each verse as a category on its own. The result obtained in this study is encouraging and shows that automatic derivation of Quranic verses’ generic causes of revelation is achievable, and reasonably reliable for understanding and implementing the teachings of the Quran. KW - Text Mining KW - Koran KW - Text mining KW - Statistical classifiers KW - Text segmentation KW - Causes of revelation KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66083 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Design and Implementation of a Model-driven XML-based Integrated System Architecture for Assisting Analysis, Understanding, and Retention of Religious Texts: The Case of The Quran N2 - Learning a book in general involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts. Once deeper understanding is achieved, one would like to organize and manage her/his knowledge in such a way that it could be easily remembered and efficiently transmitted to others. This paper discusses modeling religious texts using semantic XML markup based on frame-based knowledge representation, with the purpose of assisting understanding, retention, and sharing of the knowledge they contain. In this study, books organized in terms of chapters made up of verses are considered as the source of knowledge to model. Some metadata representing the multiple perspectives of knowledge modeling are assigned to each chapter and verse. Chapters and verses with their metadata form a meta-model, which is represented using frames, and published on a web mashup.
An XML-based annotation and visualization system, equipped with user interfaces for creating static and dynamic metadata and for annotating chapters’ contents according to user-selected semantics, as well as templates for publishing the generated knowledge on the Internet, has been developed. The system has been applied to the Quran, and the result obtained shows that multiple perspectives of information modeling can be successfully applied to religious texts in order to support analysis, understanding, and retention of the texts. KW - Wissensrepräsentation KW - Wissensmanagement KW - Content Management KW - XML KW - Koran KW - Knowledge representation KW - Meta-model KW - Frames KW - XML model KW - Knowledge Management KW - Content Management KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65737 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Computer-based Textual Documents Collation System for Reconstructing the Original Text from Automatically Identified Base Text and Ranked Witnesses N2 - Given a collection of diverging documents about some lost original text, any person interested in the text would try reconstructing it from the diverging documents. Whether it is eclecticism, stemmatics, or copy-text, one is expected to explicitly or indirectly select one of the documents as a starting point or as a base text, which could be emended through comparison with the remaining documents, so that a text that could be designated as the original document is generated. Unfortunately, the process of giving priority to one of the documents, also known as witnesses, is a subjective approach. In fact, even Cladistics, which could be considered a computer-based approach to implementing stemmatics, does not present or recommend a certain witness as a starting point for the process of reconstructing the original document. In this study, a computational method using a rule-based Bayesian classifier is used to assist text scholars in their attempts at reconstructing a non-existing document from some available witnesses. The method developed in this study consists of selecting a base text successively and collating it with the remaining documents. Each completed collation cycle stores the selected base text and its closest witness, along with a weighted score of their similarities and differences. At the end of the collation process, the witness selected most often by the majority of base texts is considered the probable base text of the collection. Witnesses’ scores are weighted using a weighting system based on the effects of the types of textual modifications on the process of reconstructing original documents. Users have the possibility to select between baseless and base-text collation. If a base text is selected, the task is reduced to ranking the witnesses with respect to the base text; otherwise a base text as well as a ranking of the witnesses with respect to it are computed and displayed on a bar diagram. Additionally, this study includes a recursive algorithm for automatically reconstructing the original text from the identified base text and ranked witnesses.
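A minimal sketch of the successive base-text selection described in this record, assuming Python; difflib.SequenceMatcher stands in for the rule-based Bayesian classifier and the weighting system for textual alterations, and all names are illustrative:

from collections import Counter
from difflib import SequenceMatcher

def closest_witness(base_text, witnesses):
    # Score each witness against the base text; ratio() is a simple
    # stand-in for the weighted similarity/difference score of the paper.
    scores = {name: SequenceMatcher(None, base_text, text).ratio()
              for name, text in witnesses.items()}
    return max(scores, key=scores.get)

def probable_base_text(documents):
    # Select each document as base text in turn, collate it with the rest,
    # and vote: the witness chosen most often is the probable base text.
    votes = Counter()
    for name, text in documents.items():
        others = {n: t for n, t in documents.items() if n != name}
        votes[closest_witness(text, others)] += 1
    return votes.most_common(1)[0][0]

# Illustrative usage with three hypothetical witnesses:
docs = {"W1": "in the beginning was the word",
        "W2": "in the beginning was a word",
        "W3": "at the beginning was the word"}
print(probable_base_text(docs))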
KW - Textvergleich KW - Text Mining KW - Textual document collation KW - Base text KW - Reconstruction of original text KW - Gothenburg model KW - Bayesian classifier KW - Textual alterations weighting system Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65749 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Philosophical and Computational Approaches for Estimating and Visualizing Months of Revelations of Quranic Chapters N2 - The question of why the Quran structure does not follow its chronology of revelation is a recurring one. Some Islamic scholars such as [1] have answered the question using hadiths, as well as other philosophical reasons based on internal evidence of the Quran itself. Unfortunately, many are still wondering about this issue today. Muslims believe that the Quran is a summary and a copy of the content of a preserved tablet called Lawhul-Mahfuz, located in heaven. Logically speaking, this suggests that the arrangement of the verses and chapters is expected to be similar to that of the Lawhul-Mahfuz. As for the arrangement of the verses in each chapter, there is unanimity that it was carried out by the Prophet himself under the guidance of Angel Gabriel with the recommendation of God. But concerning the ordering of the chapters, there are reports about some divergences [3] among the Prophet’s companions as to which chapter should precede which one. This paper argues that Quranic chapters might have been arranged according to months and seasons of revelation. In fact, based on some verses of the Quran, it is defensible that the Lawhul-Mahfuz itself is understood to have been structured in terms of the months of the year. In this study, philosophical and mathematical arguments for computing chapters’ months of revelation are discussed, and the result is displayed on an interactive scatter plot. KW - Text Mining KW - Visualisierung KW - Koran KW - Text mining KW - Visualization KW - Chronology of revelation KW - Chapters arrangement KW - Quran KW - Lawhul-Mahfuz Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65784 ER - TY - THES A1 - Selbach, Stefan T1 - Hybride bitparallele Volltextsuche T1 - Hybrid Bit-parallel Full-text Search N2 - Der große Vorteil eines q-Gramm-Indexes liegt darin, dass es möglich ist, beliebige Zeichenketten in einer Dokumentensammlung zu suchen. Ein Nachteil jedoch liegt darin, dass bei größer werdenden Datenmengen dieser Index dazu neigt, sehr groß zu werden, was mit einem deutlichen Leistungsabfall verbunden ist. In dieser Arbeit wird eine neuartige Technik vorgestellt, die die Leistung eines q-Gramm-Indexes mithilfe zusätzlicher M-Matrizen für jedes q-Gramm und durch die Kombination mit einem invertierten Index erhöht. Eine M-Matrix ist eine Bit-Matrix, die Informationen über die Positionen eines q-Gramms enthält. Auch bei der Kombination von zwei oder mehreren q-Grammen bieten diese M-Matrizen Informationen über die Positionen der Kombination. Dies kann verwendet werden, um die Komplexität der Zusammenführung der q-Gramm-Trefferlisten für eine gegebene Suchanfrage zu reduzieren, und verbessert die Leistung des n-Gramm-invertierten Index. Die Kombination mit einem termbasierten invertierten Index beschleunigt die durchschnittliche Suchzeit zusätzlich und vereint die Vorteile beider Index-Formate. Redundante Informationen werden in dem q-Gramm-Index reduziert und weitere Funktionalität hinzugefügt, wie z.B.
die Bewertung von Treffern nach Relevanz, die Möglichkeit, nach Konzepten zu suchen, oder Indexpartitionierungen nach Wichtigkeit der enthaltenen Terme zu erstellen. N2 - The major advantage of the n-gram inverted index is the possibility to locate any given substring in a document collection. Nevertheless, the n-gram inverted index also has its drawbacks: as the collections get bigger, this index tends to become very large and the performance drops significantly. A novel technique is proposed to enhance the performance of an n-gram inverted index by using additional m-matrixes for each n-gram and by combining it with an inverted index. An m-matrix is a bit matrix containing information about the positions of an n-gram. When combining two or more n-grams, these m-matrixes provide information about the positions of the combination. This can be used to reduce the complexity of merging the n-gram postings lists for a given search and improves the performance of the n-gram inverted index. The combination with a term-based inverted index speeds up the average search time even more and combines the benefits of both index formats. Redundant information is reduced in the n-gram index and further functionality is added, like the ranking of hits, the possibility to search for concepts and to create index partitions according to the relevance of the contained terms. KW - Information Retrieval KW - Information-Retrieval-System KW - Suchverfahren KW - Invertierte Liste KW - n-Gramm KW - q-Gramm KW - Volltextsuche KW - Bit Parallelität KW - Konzeptsuche KW - q-gram KW - n-gram KW - bit-parallel KW - full-text search KW - concept search Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66476 ER - TY - JOUR A1 - Schmid, Benjamin A1 - Schindelin, Johannes A1 - Cardona, Albert A1 - Longair, Martin A1 - Heisenberg, Martin T1 - A high-level 3D visualization API for Java and ImageJ N2 - Background: Current imaging methods such as Magnetic Resonance Imaging (MRI), confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results: Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation. Conclusions: Our framework enables biomedical image software to be developed with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.
KW - Visualisierung KW - Java 3D KW - ImageJ KW - framework Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-67851 ER - TY - THES A1 - Duelli, Michael T1 - Heuristic Design and Provisioning of Resilient Multi-Layer Networks T1 - Heuristische Planung und Betrieb von ausfallsicheren Mehrschichtnetzen N2 - To jointly provide different services/technologies, like IP and Ethernet or IP and SDH/SONET, in a single network, equipment of multiple technologies needs to be deployed at the sites/Points of Presence (PoPs) and interconnected with each other. Therein, a technology may provide transport functionality to other technologies and increase the number of available resources by using multiplexing techniques. By providing its own switching functionality, each technology creates connections in a logical layer, which leads to the notion of multi-layer networks. The design of such networks comprises the deployment and interconnection of components to suit given traffic demands. To prevent traffic loss due to failures of networking equipment, protection mechanisms need to be established. In multi-layer networks, protection can usually be applied in any of the considered layers. In turn, the hierarchical structure of multi-layer networks also gives rise to shared risk groups (SRGs). To achieve a cost-optimal resilient network, an appropriate combination of multiplexing techniques, technologies, and their interconnections needs to be found. Thus, network design is a combinatorial problem with a large parameter and solution space. After the design stage, the resources of a multi-layer network can be provisioned to traffic demands. In particular, dynamic capacity provisioning requires interaction of sites and layers, as well as accurate retrieval of constraint information. In recent years, generalized multiprotocol label switching (GMPLS) and path computation elements (PCE) have emerged as possible approaches to these challenges. Like the design, the provisioning of multi-layer networks comprises a variety of optimization parameters, such as blocking probability, resilience, and energy efficiency. In this work, we introduce several efficient heuristics to approach the considered optimization problems. We perform capital expenditure (CAPEX)-aware design of multi-layer networks from scratch, based on the IST NOBEL phase 2 project's cost and equipment data. We incorporate traffic and resilience requirements in different and multiple layers as well as different network architectures. On top of the designed networks, we consider the dynamic provisioning of multi-layer traffic based on the GMPLS and PCE architecture. We evaluate different PCE deployments, information retrieval strategies, and re-optimization. Finally, we show how information about provisioning utilization can be used to provide feedback for network design. N2 - Um in einem Netz verschiedene Dienste/Schichten, z.B. IP und Ethernet oder IP und SDH/SONET, parallel anbieten zu können, müssen Komponenten mehrerer Technologien an den Standorten verbaut und miteinander verbunden werden. Hierbei kann eine Technologie als Transportschicht für andere Technologien fungieren und die Zahl der verfügbaren Ressourcen durch Multiplex-Techniken erhöhen. Durch die Bereitstellung eigener Switching-Funktionalität erzeugt jede Technologie Verbindungen in einer logischen Schicht. Dies führt zu der Bezeichnung Mehrschichtnetz (engl. multi-layer network).
Die Planung solcher Netze hat das Verbauen und Verbinden von Komponenten zum Ziel, so dass gegebene Verkehrsströme realisiert werden können. Um Unterbrechungen der Verkehrsströme aufgrund von Fehlern in den Netzkomponenten zu verhindern, müssen Schutzmechanismen eingebaut werden. In Mehrschichtnetzen können solche Schutzmechanismen in jeder beliebigen Schicht betrachtet werden. Allerdings birgt die hierarchische Struktur von Mehrschichtnetzen das Risiko von shared risk groups (SRG). Um ein kostenoptimales ausfallsicheres Netz zu erhalten, muss eine passende Kombination aus Multiplex-Techniken, Technologien und deren Verbindungen gefunden werden. Die Netzplanung ist daher ein kombinatorisches Problem mit einem großen Parameter- und Lösungsraum. Nach der Planungsphase können die Ressourcen eines Mehrschichtnetzes für Verkehrsströme vorgehalten werden. Die Betrachtung von dynamischen Kapazitätsanforderungen erfordert die Interaktion von Knoten und Schichten sowie akkurate Gewinnung von Informationen zur Auslastung. In jüngster Zeit sind das Generalized Multiprotocol Label Switching (GMPLS) und das Path Computation Element (PCE) als mögliche Lösungsansätze für diese Herausforderungen entstanden. Wie in der Planung beinhaltet auch der Betrieb von Mehrschichtnetzen eine Vielzahl von Optimierungsparametern, wie die Blockierungswahrscheinlichkeit, Ausfallsicherheit und Energie-Effizienz. In dieser Arbeit führen wir verschiedene effiziente Heuristiken ein, um die betrachteten Optimierungsprobleme anzugehen. Wir planen Mehrschichtnetze von Grund auf und minimieren hinsichtlich der Anschaffungskosten basierend auf den Kosten- und Komponenten-Daten des IST NOBEL Phase 2 Projekts. Wir berücksichtigen Anforderungen der Verkehrsströme und Ausfallsicherheit in verschiedenen Schichten und in mehreren Schichten gleichzeitig sowie verschiedene Netzarchitekturen. Aufsetzend auf dem geplanten Netz betrachten wir den Betrieb von Mehrschichtnetzen mit dynamischem Verkehr basierend auf der GMPLS- und PCE-Architektur. Wir bewerten verschiedene PCE-Installationen, Strategien zur Informationsgewinnung und Re-Optimierung. Abschließend zeigen wir, wie Informationen über die Auslastung im Betrieb genutzt werden können, um Rückmeldung an die Netzplanung zu geben. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 02/12 KW - Mehrschichtsystem KW - Planung KW - Ressourcenmanagement KW - Ausfallsicheres System KW - Mehrschichtnetze KW - Pfadberechnungselement KW - Ausfallsicherheit KW - Multi-Layer KW - Design KW - Resilience KW - Resource Management KW - Path Computation Element Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-69433 ER - TY - THES A1 - Saska, Martin T1 - Trajectory planning and optimal control for formations of autonomous robots T1 - Die Bahnplanung und die optimale Steuerung für Formationen der autonomen Roboter N2 - In this thesis, we present novel approaches for formation driving of nonholonomic robots and optimal trajectory planning to reach a target region. The methods consider a static known map of the environment as well as unknown and dynamic obstacles detected by sensors of the formation. The algorithms are based on leader-following techniques, where the formation of car-like robots is maintained in a shape determined by curvilinear coordinates. Beyond this, the general methods of formation driving are specialized and extended for the application of airport snow shoveling.
Detailed descriptions of the algorithms, complemented by relevant stability and convergence studies, will be provided in the following chapters. Furthermore, the applicability of the methods is verified by various simulations in existing robotic environments and by a hardware experiment. N2 - In dieser Arbeit präsentieren wir neuartige Algorithmen für die Steuerung der Formationen der nichtholonomen Roboter und ihre optimale Bahnplanung. Die Algorithmen beruhen auf "leader-follower" Techniken. Die Formationen der "car-like" Roboter sind in einer bestimmten Form von "curvilinear" Koordinaten gehalten. Die Steuerungsmethoden der Formationen sind spezialisiert und erweitert für ihre Anwendung auf das Flughafenschneeschaufeln. In dieser Arbeit werden die detaillierten Beschreibungen der Algorithmen durch entsprechende Stabilitäts- und Konvergenz-Studien ergänzt. Ihre Anwendbarkeit wird durch verschiedene Simulationen und ein Hardware-Experiment überprüft. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 3 KW - Autonomer Roboter KW - Mobiler Roboter KW - Optimale Kontrolle KW - Formation KW - Steuerung KW - formation driving KW - mobile robots KW - snow shoveling KW - receding horizon control KW - model predictive control KW - trajectory planning Y1 - 2009 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-53175 SN - 978-3-923959-56-3 ER - TY - THES A1 - Fehler, Manuel T1 - Kalibrierung Agenten-basierter Simulationen T1 - Calibration of Agent-based Simulations N2 - In der vorliegenden Arbeit wird das Problem der Kalibrierung Agenten-basierter Simulationen (ABS) behandelt, also das Problem, die Parameterwerte eines Agenten-basierten Simulationsmodells so einzustellen, dass valides Simulationsverhalten erreicht wird. Das Kalibrierungsproblem für Simulationen an sich ist nicht neu und ist im Rahmen klassischer Simulationsparadigmen, wie z.B. der Makro-Simulation, fester Bestandteil der Forschung. Im Vergleich zu den dort betrachteten Kalibrierungsproblemen zeichnet sich das Kalibrierungsproblem für ABS jedoch durch eine Reihe zusätzlicher Herausforderungen aus, welche die direkte Anwendung existierender Kalibrierungsverfahren in begrenzter Zeit erschweren bzw. nicht mehr sinnvoll zulassen. Die Lösung dieser Probleme steht im Zentrum dieser Dissertation: Das Ziel besteht darin, den Nutzer bei der Kalibrierung von ABS auf der Basis von unzureichenden, potentiell fehlerhaften Daten und Wissen zu unterstützen. Dabei sollen drei Hauptprobleme gelöst werden:
1) Vereinfachung der Kalibrierung großer Agenten-Parametermengen auf der Mikro-Ebene in Agenten-basierten Simulationen durch Ausnutzung der spezifischen Struktur von ABS (nämlich dem Aufbau aus einer Menge von Agentenmodellen).
2) Kalibrierung Agenten-basierter Simulationen, so dass auf allen relevanten Beobachtungsebenen valides Simulationsverhalten erzeugt wird (mindestens Mikro- und Makro-Ebene). Als erschwerende Randbedingung muss die Kalibrierung unter der Voraussetzung einer Makro-Mikro-Wissenslücke durchgeführt werden.
3) Kalibrierung Agenten-basierter Simulationen auf der Mikro-Ebene unter der Voraussetzung, dass zur Kalibrierung einzelner Agentenmodelle nicht ausreichend und potentiell verfälschte Daten zur Verhaltensvalidierung zur Verfügung stehen.
Hierzu wird in dieser Arbeit das sogenannte Makro-Mikro-Verfahren zur Kalibrierung von Agenten-basierten Simulationen entwickelt. Das Verfahren besteht aus einem Basisverfahren, das im Verlauf der Arbeit um verschiedene Zusatzverfahren erweitert wird.
Das Makro-Mikro-Verfahren und seine Erweiterungen sollen dazu dienen, die Modellkalibrierung trotz stark verrauschter Daten und eingeschränkten Wissens über die Wirkungszusammenhänge im Originalsystem geeignet zu ermöglichen und dabei den Kalibrierungsprozess zu beschleunigen:
1) Makro-Mikro-Kalibrierungsverfahren: Das in dieser Arbeit entwickelte Makro-Mikro-Verfahren unterstützt den Nutzer durch eine kombinierte Kalibrierung auf der Mikro- und der Makro-Beobachtungsebene, die gegebenenfalls durch Zwischenebenen erweitert werden kann. Der Grundgedanke des Verfahrens besteht darin, das Kalibrierungsproblem in eines auf aggregierter Verhaltensebene und eines auf der Ebene des Mikro-Agentenverhaltens aufzuteilen. Auf der Makro-Ebene wird nach validen idealen aggregierten Verhaltensmodellen (IVM) der Agenten gesucht. Auf der Mikro-Ebene wird versucht, die individuellen Modelle der Agenten auf Basis des erwünschten Gesamtverhaltens und der ermittelten IVM so zu kalibrieren, dass insgesamt Simulationsverhalten entsteht, das sowohl auf Mikro- als auch auf Makro-Ebene valide ist.
2) Erweiterung 1: Robuste Kalibrierung: Um den Umgang mit potentiell verrauschten Validierungskriterien (d.h. mit verrauschten Daten über ein Originalsystem, auf denen die Validierungskriterien der Simulation beruhen) und Modellteilen während der Kalibrierung von ABS zu ermöglichen, wird eine robuste Kalibrierungstechnik zur Anwendung im Makro-Mikro-Verfahren entwickelt.
3) Erweiterung 2: Kalibrierung mit Heterogenitätssuche: Als zweite Erweiterung des Makro-Mikro-Verfahrens wird ein Verfahren entwickelt, das das Problem des unklaren Detaillierungsgrades von ABS auf der Ebene der Parameterwerte adressiert. Prinzipiell kann zwar jeder Agent unterschiedliche Parameterwerte verwenden, obwohl eine geringere Heterogenität zur Erzeugung validen Verhaltens ausreichend wäre. Die entwickelte Erweiterung versucht, während der Kalibrierung eine geeignete Heterogenitätsausprägung für die Parameterwerte der Agenten zu ermitteln. Unter einer Heterogenitätsausprägung wird dabei eine Einteilung der simulierten Agenten in Gruppen mit jeweils gleichen Parameterwerten verstanden. Die Heterogenitätssuche dient dazu, einen Kompromiss zu finden zwischen der Notwendigkeit, sehr große Parametersuchräume durchsuchen zu müssen, und dem Ziel, den Suchraum so klein wie möglich zu halten.
N2 - In this doctoral thesis the problem of calibrating agent-based simulations (ABS) is treated, i.e. the problem of adjusting the parameter values of an agent-based simulation model to achieve valid simulation behavior. The calibration problem for simulations per se is not new and is an active part of research in the context of traditional simulation paradigms, such as the macro-simulation. Compared to the problems considered there, the problems for ABS can be distinguished by several additional challenges that complicate the direct application of existing calibration procedures in a limited time, or that do not allow applying existing procedures at all. The goal of this thesis is to assist the user in the calibration of ABS on the basis of incomplete and potentially noisy data or knowledge, and in dealing with large numbers of parameter values if an ABS with many individual agents needs to be calibrated. The thesis covers the following three main topics:
1) Simplification of the calibration of many agent parameter values on the micro-level in ABS. This is done by exploiting the specific structure of ABS (i.e.
that an ABS consists of a set of agent models).
2) Calibration of agent-based simulations, so that valid simulation behavior is created on all relevant behavior observation levels (at least the micro- and macro-level). This needs to be possible without having full knowledge about how the macro-level behavior emerges from the modeled micro behavior.
3) Calibration of agent-based simulations on the micro-level under the constraint that only partial and potentially noisy data for testing and validation of single individual agent models is available.
To achieve this, the so-called “Macro-Micro Procedure” for calibrating agent-based simulations is developed. The approach consists of a basic procedure that is extended in the course of the work with various additional techniques:
1) Macro-Micro Calibration Procedure: The Macro-Micro Procedure supports the user by applying a combined calibration on the micro and the macro observation level, which can optionally be expanded using additional intermediate levels. The basic idea of the procedure consists of separating the calibration problem into one at the aggregate behavior level and one at the level of the micro-agent behavior. At the macro level, valid simulation behavior for ideal aggregate behavior models (IAM) of agents is determined. At the micro level, the goal is to calibrate the models of the individual agents based on the desired overall behavior and the IAM determined at the macro level. Upon completion, the simulation behavior created is valid both at the micro and at the macro level.
2) Extension 1: Robust Calibration: In order to deal with potentially noisy validation criteria and model parts (i.e. with noisy data about the original system from which the validation criteria of the simulation are created), a robust calibration technique is developed that can be used as part of the Macro-Micro Procedure.
3) Extension 2: Calibration with heterogeneity search: The second extension of the Macro-Micro Procedure addresses the problem of an unclear level of detail on the level of the parameter values. Theoretically, it is possible to use different parameter values for each individual simulated agent, which leads to a huge parameter search space. Often it is, however, sufficient to use a lower heterogeneity in the parameter values to generate valid behavior, which would allow calibration in a smaller search space. The developed extension attempts to determine such a suitable heterogeneity manifestation for the parameter values of the agents as part of the calibration process itself. A heterogeneity manifestation is performed by dividing the agents into groups of agents with homogeneous parameter values. The developed heterogeneity search offers a compromise between the necessity of having to search very large parameter search spaces and the goal of keeping the search space as small as possible during the calibration.
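A minimal sketch of the two-stage structure behind the Macro-Micro Procedure, assuming Python/SciPy; the toy least-squares objectives macro_error and micro_error are placeholder assumptions standing in for the thesis' validity criteria, not the actual models:

import numpy as np
from scipy.optimize import minimize

def macro_error(iam_params, macro_target):
    # Toy aggregate model: the IAM output is simply the parameter sum;
    # the error measures distance to the desired macro-level behavior.
    return (iam_params.sum() - macro_target) ** 2

def micro_error(agent_params, iam_params):
    # Toy micro criterion: the summed individual agent behavior
    # should reproduce the ideal aggregate model found in stage 1.
    return (agent_params.sum() - iam_params.sum()) ** 2

# Stage 1 (macro level): search for valid ideal aggregate model parameters.
iam = minimize(macro_error, x0=np.zeros(2), args=(42.0,)).x

# Stage 2 (micro level): calibrate the individual agents against the IAM.
agents = minimize(micro_error, x0=np.zeros(10), args=(iam,)).x

Splitting the problem this way keeps each search space small: the macro stage fixes a target for the aggregate behavior, and the micro stage only has to match that target instead of exploring all agent parameters against the full validation criteria at once.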
KW - Computersimulation KW - Mehrebenensimulation KW - Autonomer Agent KW - Agenten-basierte Simulation KW - Multiagentensimulation KW - Parameterkalibrierung KW - Hierarchische Simulation KW - Simulation KW - Agent KW - Calibration KW - Optimization KW - Agent-based Simulation KW - Multi-Agent-Simulation Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-64762 ER - TY - THES A1 - Sun, Kaipeng T1 - Six Degrees of Freedom Object Pose Estimation with Fusion Data from a Time-of-flight Camera and a Color Camera T1 - 6DOF Posenschätzung durch Datenfusion einer Time-of-Flight-Kamera und einer Farbkamera N2 - Object six Degrees of Freedom (6DOF) pose estimation is a fundamental problem in many practical robotic applications, where the target or an obstacle with a simple or complex shape can move fast in cluttered environments. In this thesis, a 6DOF pose estimation algorithm is developed based on the fused data from a time-of-flight camera and a color camera. The algorithm is divided into two stages, an annealed particle filter based coarse pose estimation stage and a gradient descent based accurate pose optimization stage. In the first stage, each particle is evaluated with sparse representation. In this stage, the large inter-frame motion of the target can be handled well. In the second stage, the conventional range-data-based Iterative Closest Point algorithm is extended by incorporating the target appearance information and used for calculating the accurate pose by refining the coarse estimate from the first stage. For dealing with significant illumination variations during tracking, spherical harmonic illumination modeling is investigated and integrated into both stages. The robustness and accuracy of the proposed algorithm are demonstrated through experiments on various objects in both indoor and outdoor environments. Moreover, real-time performance can be achieved with graphics processing unit acceleration. N2 - Die 6DOF Posenschätzung von Objekten ist ein fundamentales Problem in vielen praktischen Robotikanwendungen, bei denen sich ein Ziel- oder Hindernisobjekt, einfacher oder komplexer Form, schnell in einer unstrukturierten schwierigen Umgebung bewegt. In dieser Forschungsarbeit wird zur Lösung des Problems ein 6DOF Posenschätzer entwickelt, der auf der Fusion von Daten einer Time-of-Flight-Kamera und einer Farbkamera beruht. Der Algorithmus ist in zwei Phasen unterteilt: ein Annealed Partikel-Filter bestimmt eine grobe Posenschätzung, welche mittels eines Gradientenverfahrens in einer zweiten Phase optimiert wird. In der ersten Phase wird jeder Partikel mittels sparse representation ausgewertet; auf diese Weise kann eine große Inter-Frame-Bewegung des Zielobjektes gut behandelt werden. In der zweiten Phase wird die genaue Pose des Zielobjektes mittels des konventionellen, auf Entfernungsdaten beruhenden, Iterative Closest Point-Algorithmus aus der groben Schätzung der ersten Stufe berechnet. Der Algorithmus wurde dabei erweitert, so dass auch Informationen über das äußere Erscheinungsbild des Zielobjektes verwendet werden. Zur Kompensation von signifikanten Beleuchtungsschwankungen während des Trackings wurde eine Modellierung der Ausleuchtung mittels Kugelflächenfunktionen erforscht und in beide Stufen der Posenschätzung integriert. Die Leistungsfähigkeit, Robustheit und Genauigkeit des entwickelten Algorithmus wurden in Experimenten im Innen- und Außenbereich mit verschiedenen Zielobjekten gezeigt.
Zudem konnte gezeigt werden, dass die Schätzung mit Hilfe von Grafikprozessoren in Echtzeit möglich ist. T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 10 KW - Mustererkennung KW - Maschinelles Sehen KW - Sensor KW - 3D Vision KW - 6DOF Pose Estimation KW - Visual Tracking KW - Pattern Recognition KW - Computer Vision KW - 3D Sensor Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-105089 SN - 978-3-923959-97-6 ER - TY - JOUR A1 - Andronic, Joseph A1 - Shirakashi, Ryo A1 - Pickel, Simone U. A1 - Westerling, Katherine M. A1 - Klein, Teresa A1 - Holm, Thorge A1 - Sauer, Markus A1 - Sukhorukov, Vladimir L. T1 - Hypotonic Activation of the Myo-Inositol Transporter SLC5A3 in HEK293 Cells Probed by Cell Volumetry, Confocal and Super-Resolution Microscopy JF - PLoS One N2 - Swelling-activated pathways for myo-inositol, one of the most abundant organic osmolytes in mammalian cells, have not yet been identified. The present study explores the SLC5A3 protein as a possible transporter of myo-inositol in hypotonically swollen HEK293 cells. To address this issue, we examined the relationship between the hypotonicity-induced changes in plasma membrane permeability to myo-inositol, P_ino [m/s], and the expression/localization of SLC5A3. P_ino values were determined by cell volumetry over a wide tonicity range (100–275 mOsm) in myo-inositol-substituted solutions. While negligible under mild hypotonicity (200–275 mOsm), P_ino grew rapidly at osmolalities below 200 mOsm to reach a maximum of ∼3 nm/s at 100–125 mOsm, as indicated by fast cell swelling due to myo-inositol influx. The increase in P_ino most likely resulted from the hypotonicity-mediated incorporation of cytosolic SLC5A3 into the plasma membrane, as revealed by confocal fluorescence microscopy of cells expressing EGFP-tagged SLC5A3 and super-resolution imaging of immunostained SLC5A3 by direct stochastic optical reconstruction microscopy (dSTORM). dSTORM in hypotonic cells revealed a surface density of membrane-associated SLC5A3 proteins of 200–2000 localizations/μm². Assuming SLC5A3 to be the major path for myo-inositol, a turnover rate of 80–800 myo-inositol molecules per second for a single transporter protein was estimated from the combined volumetric and dSTORM data. Hypotonic stress also caused a significant upregulation of SLC5A3 gene expression, as detected by semiquantitative RT-PCR and Western blot analysis. In summary, our data provide the first evidence for swelling-mediated activation of SLC5A3, thus suggesting a functional role of this transporter in the hypotonic volume regulation of mammalian cells. KW - electrolytes KW - isotonic KW - membrane proteins KW - cell membranes KW - hypotonic KW - hypotonic solutions KW - tonicity KW - permeability Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-126408 VL - 10 IS - 3 ER - TY - JOUR A1 - Weiß, Clemens Leonard A1 - Schultz, Jörg T1 - Identification of divergent WH2 motifs by HMM-HMM alignments JF - BMC Research Notes N2 - Background The actin cytoskeleton is a hallmark of eukaryotic cells. Its regulation, as well as its interaction with other proteins, is carefully orchestrated by actin interaction domains. One of the key players is the WH2 motif, which enables binding to actin monomers and filaments and is involved in the regulation of actin nucleation. In contrast to conserved domains, identifying this motif in protein sequences is challenging, as it is short and poorly conserved.
Findings To identify divergent members, we combined Hidden Markov Model (HMM)-to-HMM alignments with orthology predictions. Thereby, we identified nearly 500 proteins containing previously unannotated WH2 motifs. Among them is shootin-1, an actin-binding protein involved in neuron polarization. Furthermore, WH2 motifs of ‘proximal to raf’ (ptr) orthologs, which are described in the literature but not annotated in genome databases, were identified. Conclusion In summary, we substantially increased the number of WH2-motif-containing proteins. This identification of candidate regions for actin interaction could steer their experimental characterization. Furthermore, the approach outlined here can easily be adapted to the identification of divergent members of further domain families. KW - WH2 domain KW - spire KW - shootin-1 KW - actin nucleation KW - HHblits Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-126413 VL - 8 IS - 18 ER - TY - THES A1 - Kindermann, Philipp T1 - Angular Schematization in Graph Drawing N2 - Graphs are a frequently used tool to model relationships among entities. A graph is a binary relation between objects, that is, it consists of a set of objects (vertices) and a set of pairs of objects (edges). Networks are common examples of modeling data as a graph. For example, relationships between persons in a social network, or network links between computers in a telecommunication network, can be represented by a graph. The clearest way to illustrate the modeled data is to visualize the graphs. The field of Graph Drawing deals with the problem of finding algorithms to automatically generate graph visualizations. The task is to find a "good" drawing, which can be measured by different criteria such as the number of crossings between edges or the area used. In this thesis, we study Angular Schematization in Graph Drawing. By this, we mean drawings with large angles (for example, between the edges at common vertices or at crossing points). The thesis consists of three parts. First, we deal with the placement of boxes. Boxes are axis-parallel rectangles that can, for example, contain text. They can be placed on a map to label important sites, or can be used to describe semantic relationships between words in a word network. In the second part of the thesis, we consider graph drawings that visually guide the viewer. These drawings generally induce large angles between edges that meet at a vertex. Furthermore, the edges are drawn crossing-free and in a way that makes them easy to follow for the human eye. The third and final part is devoted to crossings with large angles. In drawings with crossings, it is important to have large angles between edges at their crossing point, preferably right angles. N2 - Graphen sind häufig verwendete Werkzeuge zur Modellierung von Zusammenhängen zwischen Daten. Ein Graph ist eine binäre Relation zwischen Objekten, das heißt er besteht aus einer Menge von Objekten (Knoten) und einer Menge von Paaren von Objekten (Kanten). Netzwerke sind übliche Beispiele für das Modellieren von Daten als ein Graph. Beispielsweise lassen sich Beziehungen zwischen Personen in einem sozialen Netzwerk oder Netzanbindungen zwischen Computern in einem Telekommunikationsnetz als Graph darstellen. Die modellierten Daten können am anschaulichsten dargestellt werden, indem man die Graphen visualisiert. Der Bereich des Graphenzeichnens behandelt das Problem, Algorithmen zum automatischen Erzeugen von Graphenvisualisierungen zu finden.
Das Ziel ist es, eine "gute" Zeichnung zu finden, was durch unterschiedliche Kriterien gemessen werden kann; zum Beispiel durch die Anzahl der Kreuzungen zwischen Kanten oder durch den Platzverbrauch. In dieser Arbeit beschäftigen wir uns mit Winkelschematisierung im Graphenzeichnen. Darunter verstehen wir Zeichnungen, in denen die Winkel (zum Beispiel zwischen Kanten an einem gemeinsamen Knoten oder einem Kreuzungspunkt) möglichst groß gestaltet sind. Die Arbeit besteht aus drei Teilen. Im ersten Teil betrachten wir die Platzierung von Boxen. Boxen sind achsenparallele Rechtecke, die zum Beispiel Text enthalten. Sie können beispielsweise auf einer Karte platziert werden, um wichtige Standorte zu beschriften, oder benutzt werden, um semantische Beziehungen zwischen Wörtern in einem Wortnetzwerk darzustellen. Im zweiten Teil der Arbeit untersuchen wir Graphenzeichnungen, die den Betrachter visuell führen. Im Allgemeinen haben diese Zeichnungen große Winkel zwischen Kanten, die sich in einem Knoten treffen. Außerdem werden die Verbindungen kreuzungsfrei und so gezeichnet, dass es dem menschlichen Auge leicht fällt, ihnen zu folgen. Im dritten und letzten Teil geht es um Kreuzungen mit großen Winkeln. In Zeichnungen mit Kreuzungen ist es wichtig, dass die Winkel zwischen Kanten an Kreuzungspunkten groß sind, vorzugsweise rechtwinklig. KW - graph drawing KW - angular schematization KW - boundary labeling KW - contact representation KW - word clouds KW - monotone drawing KW - smooth orthogonal drawing KW - simultaneous embedding KW - right angle crossing KW - independent crossing KW - Graphenzeichnen KW - Winkel KW - Kreuzung KW - v Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-112549 SN - 978-3-95826-020-7 (print) SN - 978-3-95826-021-4 (online) PB - Würzburg University Press CY - Würzburg ER - TY - RPRT A1 - Kounev, Samuel A1 - Brosig, Fabian A1 - Huber, Nikolaus T1 - The Descartes Modeling Language N2 - This technical report introduces the Descartes Modeling Language (DML), a new architecture-level modeling language for modeling Quality-of-Service (QoS) and resource management related aspects of modern dynamic IT systems, infrastructures and services. DML is designed to serve as a basis for self-aware resource management during operation, ensuring that system QoS requirements are continuously satisfied while infrastructure resources are utilized as efficiently as possible. KW - Ressourcenmanagement KW - Software Engineering KW - Resource and Performance Management KW - Software Performance Engineering KW - Software Performance Modeling KW - Performance Management KW - Quality-of-Service Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-104887 ER - TY - JOUR A1 - Becker, Martin A1 - Caminiti, Saverio A1 - Fiorella, Donato A1 - Francis, Louise A1 - Gravino, Pietro A1 - Haklay, Mordechai (Muki) A1 - Hotho, Andreas A1 - Loreto, Vittorio A1 - Mueller, Juergen A1 - Ricchiuti, Ferdinando A1 - Servedio, Vito D. P. A1 - Sirbu, Alina A1 - Tria, Francesca T1 - Awareness and Learning in Participatory Noise Sensing JF - PLOS ONE N2 - Over the last few years, the development of ICT infrastructures has facilitated the emergence of new paradigms for looking at society and the environment. Participatory environmental sensing, i.e., directly involving citizens in environmental monitoring, is one example; it is hoped to encourage learning and enhance awareness of environmental issues.
In this paper, an analysis of the behaviour of individuals involved in noise sensing is presented. Citizens have been involved in noise measuring activities through the WideNoise smartphone application. This application has been designed to record both objective (noise samples) and subjective (opinions, feelings) data. The application has been freely available to anyone and has been employed widely around the world. In addition, several test cases have been organised in European countries. Based on the information submitted by users, an analysis of emerging awareness and learning is performed. The data show that changes in the way the environment is perceived do appear after repeated usage of the application. Specifically, users learn how to recognise the different noise levels they are exposed to. Additionally, the subjective data collected indicate increased user involvement over time and a categorisation effect between pleasant and less pleasant environments. KW - exposure Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-127675 SN - 1932-6203 VL - 8 IS - 12 ER - TY - JOUR A1 - Zirkel, J. A1 - Cecil, A. A1 - Schäfer, F. A1 - Rahlfs, S. A1 - Ouedraogo, A. A1 - Xiao, K. A1 - Sawadogo, S. A1 - Coulibaly, B. A1 - Becker, K. A1 - Dandekar, T. T1 - Analyzing Thiol-Dependent Redox Networks in the Presence of Methylene Blue and Other Antimalarial Agents with RT-PCR-Supported in silico Modeling JF - Bioinformatics and Biology Insights N2 - BACKGROUND: In the face of growing resistance of malaria parasites to drugs, pharmacological combination therapies are important. There is accumulating evidence that methylene blue (MB) is an effective drug against malaria. Here we explore the biological effects of MB, both alone and in combination therapy, using modeling and experimental data. RESULTS: We built a model of the central metabolic pathways in P. falciparum. Metabolic flux modes and their changes under MB were calculated by integrating experimental data (RT-PCR data on mRNAs for redox enzymes) as constraints with results from the YANA software package for metabolic pathway calculations. Several different lines of MB attack on Plasmodium redox defense were identified by analysis of the network effects. Next, chloroquine resistance based on the pfmdr and pfcrt transporters, as well as pyrimethamine/sulfadoxine resistance (caused by mutations in DHF/DHPS), were modeled in silico. Further modeling shows that MB has a favorable synergism in its antimalarial network effects with these commonly used antimalarial drugs. CONCLUSIONS: Theoretical and experimental results support that methylene blue should, because of its resistance-breaking potential, be further tested as a key component in drug combination therapy efforts in holoendemic areas. KW - methylene blue KW - malaria KW - elementary mode analysis KW - drug KW - resistance KW - combination therapy KW - pathway KW - metabolic flux Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-123751 N1 - This is an open access article. Unrestricted non-commercial use is permitted provided the original work is properly cited. VL - 6 ER - TY - JOUR A1 - Krueger, Beate A1 - Friedrich, Torben A1 - Förster, Frank A1 - Bernhardt, Jörg A1 - Gross, Roy A1 - Dandekar, Thomas T1 - Different evolutionary modifications as a guide to rewire two-component systems JF - Bioinformatics and Biology Insights N2 - Two-component systems (TCS) are short signalling pathways generally occurring in prokaryotes.
They frequently regulate prokaryotic stimulus responses and are thus also of interest for engineering in biotechnology and synthetic biology. The aim of this study is to better understand and describe the rewiring of TCS by investigating different evolutionary scenarios. Based on large-scale screens of TCS in different organisms, this study gives detailed data, concrete alignments, and structure analyses for three general modification scenarios in which TCS were rewired for new responses and functions: (i) exchanges in the sequence within single TCS domains; (ii) exchange of whole TCS domains; (iii) addition of new components modulating TCS function. As a result, the replacement of stimulus and promoter cassettes to rewire TCS is well defined by exploiting the alignments given here. The diverged TCS examples are non-trivial and the design is challenging. Designed connector proteins may also be useful to modify TCS in selected cases. KW - histidine kinase KW - connector KW - Mycoplasma KW - engineering KW - promoter KW - sensor KW - response regulator KW - synthetic biology KW - sequence alignment Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-123647 N1 - This is an open access article. Unrestricted non-commercial use is permitted provided the original work is properly cited. VL - 6 ER - TY - JOUR A1 - Merget, Benjamin A1 - Koetschan, Christian A1 - Hackl, Thomas A1 - Förster, Frank A1 - Dandekar, Thomas A1 - Müller, Tobias A1 - Schultz, Jörg A1 - Wolf, Matthias T1 - The ITS2 Database JF - Journal of Visualized Experiments N2 - The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. As ITS2 research mainly focused on the very variable ITS2 sequence, it confined this marker to low-level phylogenetics only. However, the combination of the ITS2 sequence and its highly conserved secondary structure improves the phylogenetic resolution and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation. The ITS2 Database presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank, accurately reannotated. Following an annotation by profile Hidden Markov Models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum-energy-based fold (direct fold) results in a correct four-helix conformation. If this is not the case, the structure is predicted by homology modeling. In homology modeling, an already known secondary structure is transferred to another ITS2 sequence whose secondary structure could not be folded correctly by a direct fold. The ITS2 Database is not only a database for the storage and retrieval of ITS2 sequence-structures. It also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection and BLAST search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE and ProfDistS for multiple sequence-structure alignment calculation and Neighbor Joining tree reconstruction. Together they form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure. In a nutshell, this workbench simplifies first phylogenetic analyses to only a few mouse-clicks, while additionally providing tools and data for comprehensive large-scale analyses.
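The fold-then-fallback logic described for the structure prediction step can be sketched in a few lines of Python. This is a hypothetical illustration: the helper names (direct_fold, has_four_helices, transfer_structure) and the crude similarity measure are assumptions made for this example, not the database's actual components.

def similarity(a, b):
    """Crude similarity: fraction of matching positions (illustration only)."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b), 1)

def predict_structure(seq, templates, direct_fold, has_four_helices,
                      transfer_structure):
    """Try a minimum-energy (direct) fold first; if it lacks the expected
    four-helix conformation, fall back to homology modeling from the most
    similar template whose structure is already known."""
    structure = direct_fold(seq)
    if has_four_helices(structure):
        return structure, "direct fold"
    template = max(templates, key=lambda t: similarity(seq, t["sequence"]))
    return transfer_structure(seq, template), "homology model"

# Toy usage with dummy callables (dot-bracket strings stand in for structures):
result = predict_structure(
    "ACGUACGU",
    [{"sequence": "ACGUACGA", "structure": "((..))(((...)))..."}],
    direct_fold=lambda s: "." * len(s),           # pretend the MFE fold failed
    has_four_helices=lambda st: st.count("(") >= 4,  # stand-in conformation check
    transfer_structure=lambda s, t: t["structure"],
)
print(result)  # -> ('((..))(((...)))...', 'homology model')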
KW - homology modeling KW - molecular systematics KW - internal transcribed spacer 2 KW - alignment KW - genetics KW - secondary structure KW - ribosomal RNA KW - phylogenetic tree KW - phylogeny Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-124600 VL - 61 IS - e3806 ER - TY - JOUR A1 - Buchin, Kevin A1 - Buchin, Maike A1 - Byrka, Jaroslaw A1 - Nöllenburg, Martin A1 - Okamoto, Yoshio A1 - Silveira, Rodrigo I. A1 - Wolff, Alexander T1 - Drawing (Complete) Binary Tanglegrams JF - Algorithmica N2 - A binary tanglegram is a drawing of a pair of rooted binary trees whose leaf sets are in one-to-one correspondence; matching leaves are connected by inter-tree edges. For applications, for example in phylogenetics, it is essential that both trees are drawn without edge crossings and that the inter-tree edges have as few crossings as possible. It is known that finding a tanglegram with the minimum number of crossings is NP-hard and that the problem is fixed-parameter tractable with respect to that number. We prove that under the Unique Games Conjecture there is no constant-factor approximation for binary trees. We show that the problem is NP-hard even if both trees are complete binary trees. For this case we give an O(n³)-time 2-approximation and a new, simple fixed-parameter algorithm. We show that the maximization version of the dual problem for binary trees can be reduced to a version of MaxCut for which the algorithm of Goemans and Williamson yields a 0.878-approximation. KW - NP-hardness KW - crossing minimization KW - binary tanglegram KW - approximation algorithm KW - fixed-parameter tractability Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-124622 VL - 62 ER - TY - GEN T1 - Jahresbericht 2014 T1 - Annual Report 2014 N2 - Jahresbericht 2014 des Rechenzentrums der Universität Würzburg N2 - Annual Report 2014 of the Computer Center, University of Wuerzburg T3 - Jahresbericht des Rechenzentrums der Universität Würzburg - 2014 KW - Rechenzentrum Universität Würzburg KW - annual report KW - Computer Center University of Wuerzburg KW - Jahresbericht Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-124432 UR - https://www.rz.uni-wuerzburg.de/infos/publikationen/ ER - TY - JOUR A1 - Toepfer, Martin A1 - Corovic, Hamo A1 - Fette, Georg A1 - Klügl, Peter A1 - Störk, Stefan A1 - Puppe, Frank T1 - Fine-grained information extraction from German transthoracic echocardiography reports JF - BMC Medical Informatics and Decision Making N2 - Background Information extraction techniques that get structured representations out of unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide this process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that makes it possible to recognize almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg.
Methods A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component was mapped to the central elements of a standardized terminology and evaluated on documents with different layouts. Results The final system achieved state-of-the-art precision (micro average 0.996) and recall (micro average 0.961) on 100 test documents that represent more than 90 % of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = 0.989 (micro average) and F1 = 0.963 (macro average). As a result of keyword matching and restrained concept extraction, the system also obtained high precision on unstructured or exceptionally short documents and on documents with uncommon layouts. Conclusions The developed terminology and the proposed information extraction system make it possible to extract fine-grained information from German semi-structured transthoracic echocardiography reports with very high precision and high recall on the majority of documents at the University Hospital of Würzburg. Extracted results populate a clinical data warehouse which supports clinical research. Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-125509 VL - 15 IS - 91 ER - TY - THES A1 - Hopfner, Marbod T1 - Source Code Analysis, Management, and Visualization for PROLOG T1 - Quelltextanalyse, Verwaltung und Visualisierung für Prolog N2 - This thesis deals with the management and analysis of source code, which is represented in XML. Using the elementary methods of the XML repository, the XML source code representation is accessed, changed, updated, and saved. We reason about the source code, refactor it, and visualize dependency graphs for call analysis. The visualized dependencies between files, modules, or packages are used to structure the source code in order to obtain a system that is easy to comprehend, modify, and extend. Sophisticated methods have been developed to slice the source code in order to obtain a working package of a large system that contains only a specific functionality. The basic methods on which the visualizations and analyses are built can be exchanged like plug-ins. The visualization methods can be reused to handle arbitrary source code representations, e.g., JAML, PHPML, PROLOGML. Dependencies from other contexts can be visualized as well, e.g., ER diagrams or website references. The tool SCAV supports these source code visualization and analysis methods. N2 - Diese Dissertation beschäftigt sich mit der Verwaltung und Analyse von Quellcode, der in XML repräsentiert ist. Es werden Abhängigkeitsgraphen visualisiert, um ein Projekt leichter verstehen zu können. Es kann auch ein Slice einer bestimmten Methode aus dem Projekt erstellt werden. Die Programmierung ist modular aufgebaut, so dass die Funktionalität leicht erweitert werden kann.
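As an illustration of the dependency-graph and slicing idea, the following Python sketch builds a call graph from an XML source representation and computes a slice by reachability. The XML schema used here (predicate/call elements) is invented for this example; it is not the actual XML representation (e.g., PROLOGML) used by SCAV.

import xml.etree.ElementTree as ET
from collections import defaultdict

SAMPLE = """<module name="demo">
  <predicate name="main"><call target="parse"/><call target="report"/></predicate>
  <predicate name="parse"><call target="tokenize"/></predicate>
  <predicate name="tokenize"/>
  <predicate name="report"/>
  <predicate name="unused"><call target="tokenize"/></predicate>
</module>"""

def call_graph(xml_text):
    """Map each predicate to the set of predicates it calls."""
    root = ET.fromstring(xml_text)
    graph = defaultdict(set)
    for pred in root.iter("predicate"):
        graph[pred.get("name")] |= {c.get("target") for c in pred.iter("call")}
    return graph

def slice_from(graph, start):
    """Transitive closure of calls: a minimal working 'slice'."""
    seen, todo = set(), [start]
    while todo:
        name = todo.pop()
        if name not in seen:
            seen.add(name)
            todo.extend(graph.get(name, ()))
    return seen

print(sorted(slice_from(call_graph(SAMPLE), "main")))
# -> ['main', 'parse', 'report', 'tokenize']  ('unused' is dropped)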
KW - Refactoring KW - Software Engineering KW - Call Graph KW - Dependency Graph KW - Abhängigkeitsgraph KW - Source Code Visualization Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-36300 ER - TY - THES A1 - Binder, Andreas T1 - Die stochastische Wissenschaft und zwei Teilsysteme eines Web-basierten Informations- und Anwendungssystems zu ihrer Etablierung T1 - The stochastic science and two subsystems of a web-based information and application system for its establishment N2 - Das stochastische Denken, die Bernoullische Stochastik und deren informationstechnologische Umsetzung namens Stochastikon stellen die Grundlage für das Verständnis und die erfolgreiche Nutzung einer stochastischen Wissenschaft dar. Im Rahmen dieser Arbeit erfolgt eine Klärung des Begriffs des stochastischen Denkens, eine anschauliche Darstellung der von Elart von Collani entwickelten Bernoullischen Stochastik und eine Beschreibung von Stochastikon. Dabei werden sowohl das Gesamtkonzept von Stochastikon als auch die Ziele, Aufgaben und die Realisierung der beiden Teilsysteme namens Mentor und Encyclopedia vorgestellt. Das stochastische Denken erlaubt eine realitätsnahe Sichtweise der Dinge, d.h. eine Sichtweise, die mit den menschlichen Beobachtungen und Erfahrungen im Einklang steht und somit die Unsicherheit über zukünftige Entwicklungen berücksichtigt. Der in diesem Kontext verwendete Begriff der Unsicherheit bezieht sich ausschließlich auf zukünftige Entwicklungen und äußert sich in Variabilität. Quellen der Unsicherheit sind einerseits die menschliche Ignoranz und andererseits der Zufall. Unter Ignoranz wird hierbei die Unwissenheit des Menschen über die unbekannten, aber feststehenden Fakten verstanden, die die Anfangsbedingungen der zukünftigen Entwicklung repräsentieren. Die Bernoullische Stochastik liefert ein Regelwerk und ermöglicht die Entwicklung eines quantitativen Modells zur Beschreibung der Unsicherheit unter expliziter Einbeziehung der beiden Quellen Ignoranz und Zufall. Das Modell trägt den Namen Bernoulli-Raum und bildet die Grundlage für die Herleitung quantitativer Verfahren, um zuverlässige und genaue Aussagen sowohl über die nicht-existente zufällige Zukunft (Vorhersageverfahren) als auch über die unbekannte feststehende Vergangenheit (Messverfahren) zu treffen. Das Softwaresystem Stochastikon implementiert die Bernoullische Stochastik in Form einer Reihe autarker, miteinander kommunizierender Teilsysteme. Ziel des Teilsystems Encyclopedia ist die Bereitstellung und Bewertung stochastischen Wissens. Das Teilsystem Mentor dient der Unterstützung des Anwenders bei der Problemlösungsfindung durch Identifikation eines richtigen Modells bzw. eines korrekten Bernoulli-Raums. Der Lösungsfindungsprozess selbst enthält keinerlei Unsicherheit. Die ganze Unsicherheit steckt in der Lösung, d.h. im Bernoulli-Raum, der die vorhandene Unwissenheit (Ignoranz) und den vorliegenden Zufall explizit enthält. N2 - Stochastic thinking, Bernoulli stochastics and its information technological realization, called Stochastikon, represent the basis for understanding and successfully utilizing stochastic science. This thesis defines the concept of stochastic thinking, introduces Bernoulli stochastics, which has been developed by Elart von Collani, and describes the IT system Stochastikon.
The concept and the design of Stochastikon are outlined, and the aims, tasks, and realizations of the two subsystems Mentor and Encyclopedia are given in detail. Stochastic thinking enables a realistic view of reality. This means a view that is in agreement with observation and experience and thus takes into account uncertainty about future developments. In this context, the term uncertainty is used exclusively with respect to future developments and materializes in variability. Sources of uncertainty are human ignorance about fixed facts on the one hand and randomness on the other. Bernoulli stochastics provides a set of rules for developing a quantitative model of uncertainty that takes into account, in particular, the two sources ignorance and randomness. The model is called Bernoulli-Space; it is the basis for reliable and precise quantitative procedures for statements about the random future (prediction procedures) as well as about the unknown fixed past (measurement procedures). The software system, called Stochastikon, implements Bernoulli stochastics based on a set of self-sustained intercommunicating subsystems. The subsystem Encyclopedia makes stochastic knowledge available, while the subsystem Mentor supports the user in solving (stochastic) problems by identifying the correct model, i.e., the correct Bernoulli-Space. The problem-solving process itself is free of uncertainty, because all uncertainty is modelled by the Bernoulli-Space, which explicitly covers the existing ignorance and randomness. KW - Stochastik KW - stochastisches Denken KW - Bernoullische Stochastik KW - Bernoulli-Raum KW - Stochastikon KW - stochastic thinking KW - Bernoulli stochastics KW - Bernoullispace KW - Stochastikon Y1 - 2006 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-26146 ER - TY - THES A1 - Binzenhöfer, Andreas T1 - Performance Analysis of Structured Overlay Networks T1 - Leistungsbewertung Strukturierter Overlay Netze N2 - Overlay networks establish logical connections between users on top of the physical network. While randomly connected overlay networks provide only a best-effort service, a new generation of structured overlay systems based on Distributed Hash Tables (DHTs) has been proposed by the research community. However, the performance of such DHTs is still not well understood. Additionally, those architectures are highly distributed and therefore appear as a black box to the operator. Yet an operator does not want to lose control over his system and needs to be able to continuously observe and examine its current state at runtime. This work addresses both problems and shows how the solutions can be combined into a more self-organizing overlay concept. First, we evaluate the performance of structured overlay networks under different aspects and thereby illuminate to what extent such architectures are able to support carrier-grade applications. Second, to enable operators to monitor and understand their deployed system in more detail, we introduce both active and passive methods to gather information about the current state of the overlay network. N2 - Unter einem Overlay Netz versteht man den Zusammenschluss mehrerer Komponenten zu einer logischen Topologie, die auf einer existierenden physikalischen Infrastruktur aufsetzt. Da zufällige Verbindungen zwischen den einzelnen Teilnehmern aber sehr ineffizient sind, wurden strukturierte Overlay Netze entworfen, bei denen die Beziehungen zwischen den einzelnen Teilnehmern fest vorgeschrieben sind.
Solche strukturierten Mechanismen versprechen zwar ein großes Potential, dieses wurde aber noch nicht ausreichend untersucht bzw. wissenschaftlich nachgewiesen. In dieser Arbeit wird mit mathematischen Methoden und ereignisorientierter Simulation die Leistungsfähigkeit von strukturierten Overlay Netzen untersucht. Da diese stark von der aktuellen Situation im Overlay abhängt, entwickeln wir Methoden, mit denen sich sowohl passiv als auch aktiv wichtige Systemparameter zur Laufzeit abschätzen bzw. messen lassen. Zusammen führen die vorgeschlagenen Methoden zu selbstorganisierenden Mechanismen, die den aktuellen Zustand des Overlays überwachen, diesen bewerten und sich gegebenenfalls automatisch an die aktuellen Verhältnisse anpassen. T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/08 KW - Overlay-Netz KW - Leistungsbewertung KW - Peer-to-Peer-Netz KW - Mathematisches Modell KW - Chord KW - Kademlia KW - DHT KW - Overlay KW - Chord KW - Kademlia KW - DHT Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-26291 ER -
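To illustrate the kind of structured overlay analyzed in the thesis, here is a minimal Python sketch of Chord-style key placement: node and key identifiers live on a ring of size 2^m, and each key is stored on its successor node. This is a didactic simplification under assumed parameters (no finger tables, churn, or stabilization protocol), not the evaluation model used in the thesis.

import hashlib

M = 8  # ring size 2**8 = 256 IDs; real deployments use larger m, e.g. 160

def chord_id(name, m=M):
    """Hash a name onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** m)

def successor(node_ids, key_id):
    """First node clockwise from key_id on the ring; this node stores the key."""
    nodes = sorted(node_ids)
    for n in nodes:
        if n >= key_id:
            return n
    return nodes[0]  # wrap around the ring

nodes = [chord_id(f"peer-{i}") for i in range(5)]
key = chord_id("some-resource")
print(f"key {key} is stored on node {successor(nodes, key)}")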