TY - THES
A1 - Schmidt, Marco
T1 - Ground Station Networks for Efficient Operation of Distributed Small Satellite Systems
T1 - Effizienter Betrieb von Verteilten Kleinsatelliten-Systemen mit Bodenstationsnetzwerken
N2 - The field of small satellite formations and constellations has attracted growing attention, based on recent advances in small satellite engineering. The utilization of distributed space systems allows the realization of innovative applications and will enable improved temporal and spatial resolution in observation scenarios. On the other hand, this new paradigm imposes a variety of research challenges. In this monograph, new networking concepts for space missions are presented which use networks of ground stations. The developed approaches combine ground station resources in a coordinated way to achieve more robust and efficient communication links. Within this thesis, the following topics were elaborated to improve the performance of distributed space missions: Appropriate scheduling of contact windows in a distributed ground system is a necessary process to avoid low utilization of ground stations. The theoretical basis for the novel concept of redundant scheduling was elaborated in detail. In addition to the presented algorithm, a scheduling system was implemented, and its performance was tested extensively on real-world scheduling problems. In the scope of data management, a system was developed which autonomously synchronizes data frames in ground station networks and uses this information to detect and correct transmission errors. The system was validated with hardware-in-the-loop experiments, demonstrating the benefits of the developed approach.
N2 - Satellitenformationen und Konstellationen rücken immer mehr in den Fokus aktueller Forschung, ausgelöst durch die jüngsten Fortschritte in der Kleinsatelliten-Entwicklung. Der Einsatz von verteilten Weltraumsystemen ermöglicht die Realisierung von innovativen Anwendungen auf Basis von hoher zeitlicher und räumlicher Auflösung in Observationsszenarien. Allerdings bringt dieses neue Paradigma der Raumfahrttechnik auch Herausforderungen in verschiedenen Forschungsfeldern mit sich. In dieser Dissertation werden neue Netzwerk-Konzepte für Raumfahrtmissionen unter Einsatz von Bodenstationsnetzwerken vorgestellt. Die präsentierten Verfahren koordinieren verfügbare Bodenstationsressourcen, um einen robusten und effizienten Kommunikationslink zu ermöglichen. In dieser Arbeit werden dabei folgende Themenfelder behandelt, um die Performance in verteilten Raumfahrtmissionen zu steigern: Das Verteilen von Kontaktfenstern (sogenanntes Scheduling) in verteilten Bodenstationssystemen ist ein notwendiger Prozess, um eine niedrige Auslastung der Stationen zu vermeiden. Die theoretische Grundlage für das Konzept des redundanten Scheduling wurde erarbeitet. Zusätzlich wurde das Verfahren in Form eines Scheduling Systems implementiert und dessen Performance ausführlich an real-world Szenarien getestet. Im Rahmen des Themenfeldes Data Management wurde ein System entwickelt, welches autonom Datenframes in Bodenstationsnetzwerken synchronisieren kann. Die in den Datenframes enthaltene Information wird genutzt, um Übertragungsfehler zu erkennen und zu korrigieren. Das System wurde mit Hardware-in-the-loop Experimenten validiert und die Vorteile des entwickelten Verfahrens wurden gezeigt.
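The data-management result summarized above (synchronizing frames received at several ground stations and exploiting the redundancy to correct transmission errors) can be pictured with a small sketch. This is not the algorithm from the thesis; it assumes a simplified setting in which all stations deliver time-aligned copies of equal length, and the function name and example frames are invented for illustration.

```python
# Illustrative sketch only, not the thesis's system: several ground stations
# receive the same downlink frame, and errors are corrected by a bytewise
# majority vote over the received copies (assumes time-aligned, equal-length frames).
from collections import Counter
from typing import List

def merge_frames(copies: List[bytes]) -> bytes:
    """Combine frame copies received at different ground stations by majority vote."""
    if not copies:
        raise ValueError("need at least one frame copy")
    length = min(len(c) for c in copies)
    merged = bytearray()
    for i in range(length):
        votes = Counter(c[i] for c in copies)       # count which byte value each station reports
        merged.append(votes.most_common(1)[0][0])   # keep the value seen by the majority
    return bytes(merged)

if __name__ == "__main__":
    received = [b"\x10\x20\x30\x40", b"\x10\x2F\x30\x40", b"\x10\x20\x30\x41"]
    print("corrected frame:", merge_frames(received).hex())  # -> 10203040
```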
T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 6
KW - Kleinsatellit
KW - Bodenstation
KW - Verteiltes System
KW - Scheduling
KW - Ground Station Networks
KW - Small Satellites
KW - Distributed Space Systems
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-64999
SN - 978-3-923959-77-8
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - A Rule-based Statistical Classifier for Determining a Base Text and Ranking Witnesses In Textual Documents Collation Process
N2 - Given a collection of diverging documents about some lost original text, any person interested in the text would try reconstructing it from the diverging documents. Whether it is eclecticism, stemmatics, or copy-text, one is expected to explicitly or indirectly select one of the documents as a starting point or as a base text, which could be emended through comparison with the remaining documents, so that a text that could be designated as the original document is generated. Unfortunately, the process of giving priority to one of the documents, also known as witnesses, is a subjective approach. In fact, even Cladistics, which could be considered a computer-based approach to implementing stemmatics, does not propose or recommend that users select a certain witness as a starting point for the process of reconstructing the original document. In this study, a computational method using a rule-based Bayesian classifier is used to assist text scholars in their attempts at reconstructing a non-existing document from some available witnesses. The method developed in this study consists of successively selecting a base text and collating it with the remaining documents. Each completed collation cycle stores the selected base text and its closest witness, along with a weighted score of their similarities and differences. At the end of the collation process, the witness selected most often as closest by the majority of base texts is considered the probable base text of the collection. Witnesses' scores are weighted using a weighting system based on the effects of the types of textual modifications on the process of reconstructing original documents. Users have the possibility to select between baseless and base text collation. If a base text is selected, the task is reduced to ranking the witnesses with respect to the base text; otherwise, a base text as well as a ranking of the witnesses with respect to it are computed and displayed on a histogram.
KW - Textvergleich
KW - Text Mining
KW - Gothenburg Modell
KW - Bayes-Klassifikator
KW - Textual document collation
KW - Base text
KW - Gothenburg model
KW - Bayesian classifier
KW - Textual alterations weighting system
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-57465
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Assisting Understanding, Retention, and Dissemination of Religious Texts Knowledge with Modeling, and Visualization Techniques: The Case of The Quran
N2 - Learning a book in general involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts. Once deeper understanding is achieved, one would like to organize and manage her/his knowledge in such a way that it could be easily remembered and efficiently transmitted to others. In this paper, books organized in terms of chapters consisting of verses are considered as the source of knowledge to be modeled. The knowledge model consists of verses with their metadata and semantic annotations.
The metadata represent the multiple perspectives of knowledge modeling. Verses with their metadata and annotations form a meta-model, which will be published on a web mashup. The meta-model with the linking between its elements constitutes a knowledge base. An XML-based annotation system that breaks down the learning process into specific tasks helps construct the desired meta-model. The system is made up of user interfaces for creating metadata, annotating chapters' contents according to user-selected semantics, and templates for publishing the generated knowledge on the Internet. The proposed software system improves comprehension and retention of knowledge contained in religious texts through modeling and visualization. The system has been applied to the Quran, and the result obtained shows that multiple perspectives of information modeling can be successfully applied to religious texts. It is expected that this short ongoing study will motivate others to engage in devising and offering software systems for cross-religion learning.
KW - Wissensmanagement
KW - Koran
KW - Knowledge Modeling
KW - Meta-model
KW - Knowledge Management
KW - Content Management
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55927
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Design and Implementation of Architectures for Interactive Textual Documents Collation Systems
N2 - One of the main purposes of textual documents collation is to identify a base text or the closest witness to the base text by analyzing and interpreting differences, also known as types of changes, that might exist between those documents. Based on this fact, it is reasonable to argue that explicit identification of types of changes such as deletions, additions, transpositions, and mutations should be part of the collation process. The identification could be carried out by an interpretation module after alignment has taken place. Unfortunately, existing collation software such as CollateX and Juxta's collation engine do not have interpretation modules. In fact, they implement the Gothenburg model [1] for the collation process, which does not include an interpretation unit. Currently, both CollateX and Juxta's collation engine neither distinguish in their critical apparatus between the types of changes nor offer statistics about those changes. This paper presents a model for both integrated and distributed collation processes that improves the Gothenburg model. The model introduces an interpretation component for computing and distinguishing between the types of changes that documents could have undergone. Moreover, two architectures implementing the model in order to solve the problem of interactive collation are discussed. Each architecture uses the CollateX library and provides, on the one hand, preprocessing functions for transforming input documents into the CollateX input format and, on the other hand, a post-processing module for enabling interactive collation. Finally, simple algorithms for distinguishing between types of changes and for linking collated source documents with the collation results are also introduced.
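The interpretation step described in this abstract (labelling aligned differences as additions, deletions, or mutations) can be illustrated with a short sketch. This is not the preprint's interpretation component and it does not use CollateX; it relies on Python's standard difflib alignment, the function name is invented, and transposition detection is deliberately omitted.

```python
# Illustrative sketch only: label aligned token-level differences between a base
# text and a witness as additions, deletions, or mutations (substitutions).
# Transpositions would require an extra pass and are not handled here.
from difflib import SequenceMatcher
from collections import Counter

def classify_changes(base: str, witness: str) -> Counter:
    """Count change types between a base text and a witness, token by token."""
    base_tokens, wit_tokens = base.split(), witness.split()
    counts = Counter()
    for op, i1, i2, j1, j2 in SequenceMatcher(None, base_tokens, wit_tokens).get_opcodes():
        if op == "insert":
            counts["addition"] += j2 - j1
        elif op == "delete":
            counts["deletion"] += i2 - i1
        elif op == "replace":
            counts["mutation"] += max(i2 - i1, j2 - j1)
    return counts

if __name__ == "__main__":
    print(classify_changes("the quick brown fox jumps", "the quick red fox jumps high"))
    # Counter({'mutation': 1, 'addition': 1})
```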
KW - Softwarearchitektur
KW - Textvergleich
KW - service based software architecture
KW - service brokerage
KW - interactive collation of textual variants
KW - Gothenburg model of collation process
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-56601
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Assisting Analysis and Understanding of Quran Search Results with Interactive Scatter Plots and Tables
N2 - The Quran is the holy book of Islam consisting of 6236 verses divided into 114 chapters called suras. Many verses are similar and even identical. Searching for similar texts (e.g. verses) could return thousands of verses that, when displayed completely or partly as a textual list, would make analysis and understanding difficult and confusing. Moreover, it would be visually impossible to instantly figure out the overall distribution of the retrieved verses in the Quran. As a consequence, reading and analyzing the verses would be tedious and unintuitive. In this study, a combination of interactive scatter plots and tables has been developed to assist analysis and understanding of the search result. Retrieved verses are clustered by chapters, and a weight is assigned to each cluster according to the number of verses it contains, so that users can visually identify the most relevant areas and figure out the places of revelation of the verses. Users visualize the complete result, can select a region of the plot to zoom in, and can click on a marker to display a table containing the verses with their English translation side by side.
KW - Text Mining
KW - Visualisierung
KW - Koran
KW - Information Visualization
KW - Visual Text Mining
KW - Scatter Plot
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55840
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - A Knowledge-based Hybrid Statistical Classifier for Reconstructing the Chronology of the Quran
N2 - Computationally categorizing the Quran's chapters has been mainly confined to the determination of the chapters' places of revelation. However, this broad classification is not sufficient to effectively and thoroughly understand and interpret the Quran. The chronology of revelation would not only improve comprehension of the philosophy of Islam, but also the ease of implementing and memorizing its laws and recommendations. This paper attempts to estimate the chapters' possible dates of revelation through their lexical frequency profiles. A hybrid statistical classifier consisting of stemming and clustering algorithms for comparing lexical frequency profiles of chapters and deriving dates of revelation has been developed. The classifier is trained using some chapters with known dates of revelation. Then it classifies chapters with uncertain dates of revelation by computing their proximity to the training ones. The results reported here indicate that the proposed methodology yields usable results in estimating the dates of revelation of the Quran's chapters based on their lexical contents.
KW - Text Mining
KW - Maschinelles Lernen
KW - text categorization
KW - Bayesian classifier
KW - distance-based classifier
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-54712
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Using Machine Learning Algorithms for Categorizing Quranic Chapters by Major Phases of Prophet Mohammad's Messengership
N2 - This paper discusses the categorization of Quranic chapters by major phases of Prophet Mohammad's messengership using machine learning algorithms.
First, the chapters were categorized by places of revelation using Support Vector Machine and naïve Bayesian classifiers separately, and their results were compared to each other, as well as to the existing traditional Islamic and Western orientalist classifications. The chapters were categorized into Meccan (revealed in Mecca) and Medinan (revealed in Medina). After that, chapters of each category were clustered using a kind of fuzzy single-linkage clustering approach, in order to correspond to the major phases of Prophet Mohammad's life. The major phases of the Prophet's life were manually derived from the Quranic text, as well as from the secondary Islamic literature, e.g. hadiths and exegesis. Previous studies on computing the places of revelation of Quranic chapters relied heavily on features extracted from existing background knowledge of the chapters. For instance, it is known that Meccan chapters contain mostly verses about faith and related problems, while Medinan ones encompass verses dealing with social issues, battles, etc. These features are by themselves insufficient as a basis for assigning the chapters to their respective places of revelation. In fact, there are exceptions, since some chapters do contain both Meccan and Medinan features. In this study, features of each category were automatically created from very few chapters whose places of revelation have been determined through identification of historical facts and events such as battles, the migration to Medina, etc. Chapters having unanimously agreed places of revelation were used as the initial training set, while the remaining chapters formed the testing set. The classification process was made recursive by regularly augmenting the training set with correctly classified chapters, in order to classify the whole testing set. Each chapter was preprocessed by removing unimportant words, stemming, and representation with the vector space model. The result of this study shows that the two classifiers produced usable results, with the Support Vector Machine classifier outperforming the naïve Bayesian one. This study indicates that the proposed methodology yields encouraging results for arranging Quranic chapters by phases of Prophet Mohammad's messengership.
KW - Koran
KW - Maschinelles Lernen
KW - Text categorization
KW - Clustering
KW - Support Vector Machine
KW - Naïve Bayesian
KW - Place of revelation
KW - Stages of Prophet Mohammad's messengership
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66862
ER -
TY - THES
A1 - Schlosser, Daniel
T1 - Quality of Experience Management in Virtual Future Networks
T1 - Netzwerkmanagement unter Berücksichtigung der vom Benutzer erfahrenen Dienstgüte in virtuellen zukünftigen Netzen
N2 - Aktuell beobachten wir eine drastische Vervielfältigung der Dienste und Anwendungen, die das Internet für den Datentransport nutzen. Dabei unterscheiden sich die Anforderungen dieser Dienste an das Netzwerk deutlich. Das Netzwerkmanagement wird durch diese Diversität der nutzenden Dienste aber deutlich erschwert, da es einem Datentransportdienstleister kaum möglich ist, die unterschiedlichen Verbindungen zu unterscheiden, ohne den Inhalt der transportierten Daten zu analysieren. Netzwerkvirtualisierung ist eine vielversprechende Lösung für dieses Problem, da sie es ermöglicht, für verschiedene Dienste unterschiedliche virtuelle Netze auf dem gleichen physikalischen Substrat zu betreiben. Diese Diensttrennung ermöglicht es, jedes einzelne Netz anwendungsspezifisch zu steuern.
Ziel einer solchen Netzsteuerung ist es, sowohl die vom Nutzer erfahrene Dienstgüte als auch die Kosteneffizienz des Datentransports zu optimieren. Darüber hinaus wird es mit Netzwerkvirtualisierung möglich, das physikalische Netz so weit zu abstrahieren, dass die aktuell fest verzahnten Rollen von Netzwerkbesitzer und Netzwerkbetreiber entkoppelt werden können. Darüber hinaus stellt Netzwerkvirtualisierung sicher, dass unterschiedliche Datennetze, die gleichzeitig auf dem gleichen physikalischen Netz betrieben werden, sich gegenseitig weder beeinflussen noch stören können. Diese Arbeit beschäftigt sich mit ausgewählten Aspekten dieses Themenkomplexes und fokussiert sich darauf, ein virtuelles Netzwerk mit bestmöglicher Dienstqualität für den Nutzer zu betreiben und zu steuern. Dafür wird ein Top-down-Ansatz gewählt, der von den Anwendungsfällen, einer möglichen Netzwerkvirtualisierungs-Architektur und aktuellen Möglichkeiten der Hardwarevirtualisierung ausgeht. Im Weiteren fokussiert sich die Arbeit dann in Richtung Bestimmung und Optimierung der vom Nutzer erfahrenen Dienstqualität (QoE) auf Applikationsschicht und diskutiert Möglichkeiten zur Messung und Überwachung von wesentlichen Netzparametern in virtualisierten Netzen.
N2 - Currently, we observe a strong growth of services and applications which use the Internet for data transport. However, the network requirements of these applications differ significantly. This makes network management difficult, since it is complicated to separate network flows into application classes without inspecting application-layer data. Network virtualization is a promising solution to this problem. It enables running different virtual networks on the same physical substrate. Separating networks based on the service supported within them allows controlling each network according to the specific needs of the application. The aim of such a network control is to optimize the user-perceived quality as well as the cost efficiency of the data transport. Furthermore, network virtualization abstracts the network functionality from the underlying implementation and facilitates the split of the currently tightly integrated roles of Internet Service Provider and network owner. Additionally, network virtualization guarantees that different virtual networks running on the same physical substrate do not interfere with each other. This thesis discusses different aspects of the network virtualization topic. It is focused on how to manage and control a virtual network to guarantee the best Quality of Experience for the user. Therefore, a top-down approach is chosen. Starting with use cases of virtual networks, a possible architecture is derived and current implementation options based on hardware virtualization are explored. In the following, this thesis focuses on assessing the Quality of Experience perceived by the user and how it can be optimized on the application layer. Furthermore, options for measuring and monitoring significant network parameters of virtual networks are considered.
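To make the QoE assessment mentioned above concrete, the following sketch maps a measured QoS disturbance to a user-level quality estimate. The exponential mapping and every coefficient value are assumptions for illustration only; they are not formulas or results from this thesis, which develops its own measurement and optimization approach.

```python
# Illustrative sketch only: a commonly assumed exponential relation between a QoS
# disturbance (here: packet loss) and user-perceived quality on a MOS-like scale.
# The coefficients alpha, beta, gamma are invented for this example.
import math

def qoe_from_loss(loss_pct: float, alpha: float = 3.0, beta: float = 0.5, gamma: float = 1.5) -> float:
    """Map a packet-loss percentage to an estimated mean opinion score (1..4.5 here)."""
    return gamma + alpha * math.exp(-beta * loss_pct)

if __name__ == "__main__":
    for loss in (0.0, 1.0, 2.0, 5.0, 10.0):
        print(f"loss {loss:4.1f}% -> estimated MOS {qoe_from_loss(loss):.2f}")
```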
T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 01/12
KW - Netzwerkmanagement
KW - Dienstgüte
KW - Netzwerkvirtualisierung
KW - QoS
KW - QoE
KW - Network Virtualization
KW - Quality of Experience
KW - Network Management
KW - Quality of Service
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-69986
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Computing Generic Causes of Revelation of the Quranic Verses Using Machine Learning Techniques
N2 - Because many verses of the holy Quran are similar, there is a high probability that similar verses addressing the same issues share the same generic causes of revelation. In this study, machine learning techniques have been employed in order to automatically derive the causes of revelation of Quranic verses. The derivation of the causes of revelation is viewed as a classification problem. Initially, the categories are based on the verses with known causes of revelation, and the testing set consists of the remaining verses. Based on a computed threshold value, a naïve Bayesian classifier is used to categorize some verses. After that, using a decision tree classifier, the remaining uncategorized verses are separated into verses that contain indicators (resultative connectors, causative expressions, …) and those that do not. As for those verses having indicators, each one is segmented into its constituent clauses by identification of the linking indicators. Then a dominant clause is extracted and considered either as the cause of revelation, or post-processed by adding or subtracting some terms to form a causal clause that constitutes the cause of revelation. Concerning the remaining unclassified verses without indicators, a naïve Bayesian classifier is again used to assign each one of them to one of the existing classes based on feature and topic similarity. As for verses that could not be classified so far, manual classification was made by considering each verse as a category on its own. The result obtained in this study is encouraging and shows that automatic derivation of Quranic verses' generic causes of revelation is achievable, and reasonably reliable for understanding and implementing the teachings of the Quran.
KW - Text Mining
KW - Koran
KW - Text mining
KW - Statistical classifiers
KW - Text segmentation
KW - Causes of revelation
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66083
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Design and Implementation of a Model-driven XML-based Integrated System Architecture for Assisting Analysis, Understanding, and Retention of Religious Texts: The Case of The Quran
N2 - Learning a book in general involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts. Once deeper understanding is achieved, one would like to organize and manage her/his knowledge in such a way that it could be easily remembered and efficiently transmitted to others. This paper discusses modeling religious texts using semantic XML markup based on frame-based knowledge representation, with the purpose of assisting understanding, retention, and sharing of the knowledge they contain. In this study, books organized in terms of chapters made up of verses are considered as the source of knowledge to model. Some metadata representing the multiple perspectives of knowledge modeling are assigned to each chapter and verse.
Chapters and verses with their metadata form a meta-model, which is represented using frames and published on a web mashup. An XML-based annotation and visualization system has been developed, equipped with user interfaces for creating static and dynamic metadata, annotating chapters' contents according to user-selected semantics, and templates for publishing the generated knowledge on the Internet. The system has been applied to the Quran, and the result obtained shows that multiple perspectives of information modeling can be successfully applied to religious texts, in order to support analysis, understanding, and retention of the texts.
KW - Wissensrepräsentation
KW - Wissensmanagement
KW - Content Management
KW - XML
KW - Koran
KW - Knowledge representation
KW - Meta-model
KW - Frames
KW - XML model
KW - Knowledge Management
KW - Content Management
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65737
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Computer-based Textual Documents Collation System for Reconstructing the Original Text from Automatically Identified Base Text and Ranked Witnesses
N2 - Given a collection of diverging documents about some lost original text, any person interested in the text would try reconstructing it from the diverging documents. Whether it is eclecticism, stemmatics, or copy-text, one is expected to explicitly or indirectly select one of the documents as a starting point or as a base text, which could be emended through comparison with the remaining documents, so that a text that could be designated as the original document is generated. Unfortunately, the process of giving priority to one of the documents, also known as witnesses, is a subjective approach. In fact, even Cladistics, which could be considered a computer-based approach to implementing stemmatics, does not propose or recommend that users select a certain witness as a starting point for the process of reconstructing the original document. In this study, a computational method using a rule-based Bayesian classifier is used to assist text scholars in their attempts at reconstructing a non-existing document from some available witnesses. The method developed in this study consists of successively selecting a base text and collating it with the remaining documents. Each completed collation cycle stores the selected base text and its closest witness, along with a weighted score of their similarities and differences. At the end of the collation process, the witness selected most often as closest by the majority of base texts is considered the probable base text of the collection. Witnesses' scores are weighted using a weighting system based on the effects of the types of textual modifications on the process of reconstructing original documents. Users have the possibility to select between baseless and base text collation. If a base text is selected, the task is reduced to ranking the witnesses with respect to the base text; otherwise, a base text as well as a ranking of the witnesses with respect to it are computed and displayed on a bar diagram. Additionally, this study includes a recursive algorithm for automatically reconstructing the original text from the identified base text and ranked witnesses.
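A compact sketch of the selection idea described above: each candidate base text is collated against the other witnesses, its closest witness is recorded, and the witness most often voted closest is proposed as the base text. This is not the author's rule-based Bayesian classifier or weighting system; the similarity measure (difflib's ratio) and the witness names are stand-ins for illustration.

```python
# Illustrative sketch only: majority-vote selection of a probable base text.
# Each witness takes a turn as base text, finds its closest witness, and the
# witness named "closest" most often is proposed as base text of the collection.
from difflib import SequenceMatcher
from collections import Counter
from typing import Dict

def closest_witness(base_id: str, witnesses: Dict[str, str]) -> str:
    """Return the id of the witness most similar to the chosen base text."""
    base = witnesses[base_id]
    others = {wid: text for wid, text in witnesses.items() if wid != base_id}
    return max(others, key=lambda wid: SequenceMatcher(None, base, others[wid]).ratio())

def propose_base_text(witnesses: Dict[str, str]) -> str:
    votes = Counter(closest_witness(wid, witnesses) for wid in witnesses)
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    docs = {
        "W1": "in the beginning was the word",
        "W2": "in the beginning was the word indeed",
        "W3": "in beginning was word",
    }
    print("proposed base text:", propose_base_text(docs))
```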
KW - Textvergleich
KW - Text Mining
KW - Textual document collation
KW - Base text
KW - Reconstruction of original text
KW - Gothenburg model
KW - Bayesian classifier
KW - Textual alterations weighting system
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65749
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Philosophical and Computational Approaches for Estimating and Visualizing Months of Revelations of Quranic Chapters
N2 - The question of why the Quran's structure does not follow its chronology of revelation is a recurring one. Some Islamic scholars such as [1] have answered the question using hadiths, as well as other philosophical reasons based on internal evidence of the Quran itself. Unfortunately, till today many are still wondering about this issue. Muslims believe that the Quran is a summary and a copy of the content of a preserved tablet called Lawhul-Mahfuz located in heaven. Logically speaking, this suggests that the arrangement of the verses and chapters is expected to be similar to that of the Lawhul-Mahfuz. As for the arrangement of the verses in each chapter, there is unanimity that it was carried out by the Prophet himself under the guidance of Angel Gabriel with the recommendation of God. But concerning the ordering of the chapters, there are reports about some divergences [3] among the Prophet's companions as to which chapter should precede which one. This paper argues that Quranic chapters might have been arranged according to months and seasons of revelation. In fact, based on some verses of the Quran, it is defensible that the Lawhul-Mahfuz itself is understood to have been structured in terms of the months of the year. In this study, philosophical and mathematical arguments for computing chapters' months of revelation are discussed, and the result is displayed on an interactive scatter plot.
KW - Text Mining
KW - Visualisierung
KW - Koran
KW - Text mining
KW - Visualization
KW - Chronology of revelation
KW - Chapters arrangement
KW - Quran
KW - Lawhul-Mahfuz
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65784
ER -
TY - THES
A1 - Selbach, Stefan
T1 - Hybride bitparallele Volltextsuche
T1 - Hybrid Bit-parallel Full-text Search
N2 - Der große Vorteil eines q-Gramm Indexes liegt darin, dass es möglich ist, beliebige Zeichenketten in einer Dokumentensammlung zu suchen. Ein Nachteil jedoch liegt darin, dass bei größer werdenden Datenmengen dieser Index dazu neigt, sehr groß zu werden, was mit einem deutlichen Leistungsabfall verbunden ist. In dieser Arbeit wird eine neuartige Technik vorgestellt, die die Leistung eines q-Gramm Indexes mithilfe zusätzlicher M-Matrizen für jedes q-Gramm und durch die Kombination mit einem invertierten Index erhöht. Eine M-Matrix ist eine Bit-Matrix, die Informationen über die Positionen eines q-Gramms enthält. Auch bei der Kombination von zwei oder mehreren q-Grammen bieten diese M-Matrizen Informationen über die Positionen der Kombination. Dies kann verwendet werden, um die Komplexität der Zusammenführung der q-Gramm Trefferlisten für eine gegebene Suchanfrage zu reduzieren, und verbessert die Leistung des n-Gramm-invertierten Index. Die Kombination mit einem termbasierten invertierten Index beschleunigt die durchschnittliche Suchzeit zusätzlich und vereint die Vorteile beider Index-Formate. Redundante Informationen werden in dem q-Gramm Index reduziert und weitere Funktionalität hinzugefügt, wie z.B. die Bewertung von Treffern nach Relevanz, die Möglichkeit, nach Konzepten zu suchen, oder Indexpartitionierungen nach Wichtigkeit der enthaltenen Terme zu erstellen.
N2 - The major advantage of the n-gram inverted index is the possibility to locate any given substring in a document collection. Nevertheless, the n-gram inverted index also has its drawbacks: if the collections get bigger, this index tends to become very large and the performance drops significantly. A novel technique is proposed to enhance the performance of an n-gram inverted index by using additional m-matrices for each n-gram and by combining it with an inverted index. An m-matrix is a bit matrix containing information about the positions of an n-gram. When combining two or more n-grams, these m-matrices provide information about the positions of the combination. This can be used to reduce the complexity of merging the n-gram postings lists for a given search and improves the performance of the n-gram inverted index. The combination with a term-based inverted index speeds up the average search time even more and combines the benefits of both index formats. Redundant information is reduced in the n-gram index and further functionality is added, such as the ranking of hits, the possibility to search for concepts, and the creation of index partitions according to the relevance of the contained terms.
KW - Information Retrieval
KW - Information-Retrieval-System
KW - Suchverfahren
KW - Invertierte Liste
KW - n-Gramm
KW - q-Gramm
KW - Volltextsuche
KW - Bit Parallelität
KW - Konzeptsuche
KW - q-gram
KW - n-gram
KW - bit-parallel
KW - full-text search
KW - concept search
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66476
ER -
TY - JOUR
A1 - Mandel, Alexander
A1 - Hörnlein, Alexander
A1 - Ifland, Marianus
A1 - Lüneburg, Edeltraud
A1 - Deckert, Jürgen
A1 - Puppe, Frank
T1 - Aufwandsanalyse für computerunterstützte Multiple-Choice Papierklausuren
T1 - Cost analysis for computer supported multiple-choice paper examinations
JF - GMS Journal for Medical Education
N2 - Introduction: Multiple-choice examinations are still fundamental for assessment in medical degree programs. In addition to content-related research, the optimization of the technical procedure is an important question. Medical examiners face three options: paper-based examinations with or without computer support, or completely electronic examinations. Critical aspects are the effort for formatting, the logistic effort during the actual examination, the quality, promptness and effort of the correction, the time for making the documents available for inspection by the students, and the statistical analysis of the examination results. Methods: For three semesters, a computer program for the input and formatting of MC questions in medical and other paper-based examinations has been used and continuously improved at Wuerzburg University. In the winter semester (WS) 2009/10 eleven, in the summer semester (SS) 2010 twelve, and in WS 2010/11 thirteen medical examinations were prepared with the program and automatically evaluated. For the last two semesters the remaining manual workload was recorded. Results: The effort for formatting and the subsequent analysis, including adjustments of the analysis, of an average examination with about 140 participants and about 35 questions was 5-7 hours for exams without complications in the winter semester 2009/2010, about 2 hours in SS 2010, and about 1.5 hours in the winter semester 2010/11.
Including exams with complications, the average time was about 3 hours per exam in SS 2010 and 2.67 hours in WS 10/11. Discussion: For conventional multiple-choice exams, the computer-based formatting and evaluation of paper-based exams offers a significant time reduction for lecturers in comparison with the manual correction of paper-based exams, and compared to purely electronically conducted exams it needs a much simpler technological infrastructure and fewer staff during the exam.
N2 - Einleitung: Multiple-Choice-Klausuren spielen immer noch eine herausragende Rolle für fakultätsinterne medizinische Prüfungen. Neben inhaltlichen Arbeiten stellt sich die Frage, wie die technische Abwicklung optimiert werden kann. Für Dozenten in der Medizin gibt es zunehmend drei Optionen zur Durchführung von MC-Klausuren: Papierklausuren mit oder ohne Computerunterstützung oder vollständig elektronische Klausuren. Kritische Faktoren sind der Aufwand für die Formatierung der Klausur, der logistische Aufwand bei der Klausurdurchführung, die Qualität, Schnelligkeit und der Aufwand der Klausurkorrektur, die Bereitstellung der Dokumente für die Einsichtnahme und die statistische Analyse der Klausurergebnisse. Methoden: An der Universität Würzburg wird seit drei Semestern ein Computerprogramm zur Eingabe und Formatierung der MC-Fragen in medizinischen und anderen Papierklausuren verwendet und optimiert, mit dem im Wintersemester (WS) 2009/2010 elf, im Sommersemester (SS) 2010 zwölf und im WS 2010/11 dreizehn medizinische Klausuren erstellt und anschließend die eingescannten Antwortblätter automatisch ausgewertet wurden. In den letzten beiden Semestern wurden die Aufwände protokolliert. Ergebnisse: Der Aufwand der Formatierung und der Auswertung einschl. nachträglicher Anpassung der Auswertung einer Durchschnittsklausur mit ca. 140 Teilnehmern und ca. 35 Fragen ist von 5-7 Stunden für Klausuren ohne Komplikation im WS 2009/2010 über ca. 2 Stunden im SS 2010 auf ca. 1,5 Stunden im WS 2010/11 gefallen. Einschließlich der Klausuren mit Komplikationen bei der Auswertung betrug die durchschnittliche Zeit im SS 2010 ca. 3 Stunden und im WS 10/11 ca. 2,67 Stunden pro Klausur. Diskussion: Für konventionelle Multiple-Choice-Klausuren bietet die computergestützte Formatierung und Auswertung von Papierklausuren einen beträchtlichen Zeitvorteil für die Dozenten im Vergleich zur manuellen Korrektur von Papierklausuren und benötigt im Vergleich zu rein elektronischen Klausuren eine deutlich einfachere technische Infrastruktur und weniger Personal bei der Klausurdurchführung.
KW - Multiple-Choice Prüfungen
KW - Automatisierte Prüfungskorrektur
KW - Aufwandsanalyse
KW - Educational Measurement (I2.399)
KW - Self-Evaluation Programs (I2.399.780)
KW - Multiple-Choice Examination
KW - Cost Analysis
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-134386
VL - 28
IS - 4
ER -
TY - JOUR
A1 - Zeeshan, Ahmed
T1 - Towards Performance Measurement and Metrics Based Analysis of PLA Applications
N2 - This article is about a measurement-analysis-based approach to help software practitioners in managing the additional complexities and variabilities in software product line applications. The architecture of the proposed approach, i.e. ZAC, is designed and implemented to perform preprocessed source code analysis, calculate traditional and product line metrics, and visualize the results in two- and three-dimensional diagrams.
Experiments using real-time data sets were performed, which concluded with the result that ZAC can be very helpful for software practitioners in understanding the overall structure and complexity of product line applications. Moreover, the obtained results prove a strong positive correlation between the calculated traditional and product line measures.
KW - Programmierbare logische Anordnung
KW - Analysis
KW - Measurement
KW - Software product lines
KW - Variability
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-68188
ER -
TY - THES
A1 - Oechsner, Simon
T1 - Performance Challenges and Optimization Potential of Peer-to-Peer Overlay Technologies
T1 - Leistungsanforderungen und Optimierungspotential von Peer-to-Peer Overlay-Technologien
N2 - In today's Internet, building overlay structures to provide a service is becoming more and more common. This approach allows for the utilization of client resources, thus being more scalable than a client-server model in this respect. However, in these architectures the quality of the provided service depends on the clients and is therefore more complex to manage. Resource utilization, both at the clients themselves and in the underlying network, determines the efficiency of the overlay application. Here, a trade-off exists between the resource providers and the end users that can be tuned via overlay mechanisms. Thus, resource management and traffic management are always quality-of-service management as well. In this monograph, the three currently significant and most widely used overlay types in the Internet are considered. These overlays are implemented in popular applications which only recently have gained importance. Thus, these overlay networks still face real-world technical challenges which are of high practical relevance. We identify the specific issues for each of the considered overlays, and show how their optimization affects the trade-offs between resource efficiency and service quality. Thus, we supply new insights and system knowledge that is not provided by previous work.
N2 - Im heutigen Internet werden immer häufiger Overlay-Strukturen aufgebaut, um eine Dienstleistung zu erbringen. Dieser Ansatz ermöglicht die Nutzung von Client-Ressourcen, so dass er in dieser Hinsicht besser skaliert als das Client-Server-Modell. Die Qualität des zur Verfügung gestellten Dienstes hängt nun aber von den Clients ab und ist daher komplizierter zu steuern. Die Ressourcennutzung, sowohl auf den Clients selbst als auch in dem zugrunde liegenden Netzwerk, bestimmt die Effizienz der Overlay-Anwendung. Hier existiert ein Trade-off zwischen Ressourcen-Anbietern und Endkunden, der über Overlay-Mechanismen geregelt werden kann. Daher ist Ressourcenmanagement und Traffic-Management gleichzeitig immer auch Quality-of-Service-Management. In dieser Arbeit werden die drei derzeit am weitesten im Internet verbreiteten und signifikanten Overlay-Typen berücksichtigt. Diese Overlays sind in populären Anwendungen, die erst vor kurzem an Bedeutung gewonnen haben, implementiert. Daher sind diese Overlay-Netze nach wie vor realen technischen Herausforderungen ausgesetzt, die von hoher praktischer Relevanz sind. Die spezifischen Herausforderungen für jedes der betrachteten Overlays werden identifiziert und es wird gezeigt, wie deren Optimierung den Trade-off zwischen Ressourceneffizienz und Service-Qualität beeinflusst. So werden neue Einsichten und Erkenntnisse über diese Systeme gewonnen, die in früheren Arbeiten nicht existieren.
T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 02/10
KW - Overlay-Netz
KW - Peer-to-Peer-Netz
KW - Leistungsbewertung
KW - Overlays
KW - Peer-to-Peer
KW - Performance Evaluation
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-50015
ER -
TY - THES
A1 - Henjes, Robert
T1 - Performance Evaluation of Publish/Subscribe Middleware Architectures
T1 - Leistungsuntersuchung von Publish/Subscribe Middleware Architekturen
N2 - While developing modern applications, it is necessary to ensure efficient and performant communication between different applications. In current environments, middleware software is used which supports the publish/subscribe communication pattern. Using this communication pattern, a publisher sends information encapsulated in messages to the middleware. A subscriber registers its interests at the middleware by means of filters. The monograph describes three different steps to determine the performance of such a system. In a first step, the message throughput performance of a publish/subscribe system in different scenarios is measured using a Java Message Service (JMS) based implementation. In the second step, the maximum achievable message throughput is described by adapted models depending on the filter complexity and the replication degree. Using the model, the performance characteristics of a specific system in a given scenario can be determined. These numbers are used for the queuing model described in the third part of the thesis, which supports the dimensioning of a system in realistic scenarios. Additionally, we introduce a method to approximate an M/G/1 system numerically in an efficient way, which can be used for real-time analysis to predict the expected performance in a certain scenario. Finally, the analytical model is used to investigate different possibilities to ensure the scalability of the maximum achievable message throughput of the overall system.
N2 - Bei der Entwicklung moderner Applikationen ist es notwendig, eine effiziente und performante Kommunikation zwischen den einzelnen Anwendungen sicherzustellen. In der Praxis kommt dabei eine Middleware Software zum Einsatz, die das Publish/Subscribe Kommunikationsmuster unterstützt. Dabei senden Publisher Informationen in Form von Nachrichten an die Middleware. Die Subscriber hingegen zeigen durch die Nutzung von Filtern der Middleware an, welche Informationen zugestellt werden sollen. Die Arbeit beschreibt ein dreistufiges Verfahren zur Leistungsbestimmung eines solchen Systems. Zunächst wird durch Messung die Leistung von Publish/Subscribe Systemen in verschiedenen Szenarien untersucht am Beispiel von Java Message Service (JMS) basierten Implementierungen. Danach wird der maximale Nachrichtendurchsatz in Abhängigkeit der Filterkomplexität und des Nachrichtenreplikationsgrades durch einfache Modelle beschrieben. Damit können die Leistungskennwerte für ein System unter vorgegebenen Randbedingungen beschrieben werden. Im dritten Teil wird mittels Leistungsbewertung und durch Anwendung eines Warteschlangenmodells die Leistung im praxisnahen Umfeld beschrieben, so dass eine Dimensionierung möglich wird. Zusätzlich wird ein mathematisch approximatives Verfahren vorgestellt, um ein M/G/1 System numerisch effizient berechnen zu können, was bei der Echtzeitbewertung eines Systems zur Leistungsvorhersage benutzt werden kann. Des Weiteren werden mittels des Modells Möglichkeiten untersucht, die Skalierbarkeit des Gesamtsystems in Bezug auf den Nachrichtendurchsatz sicherzustellen.
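As a pointer to what the queueing part of the Henjes thesis involves, the following sketch evaluates the textbook Pollaczek-Khinchine formula for the mean waiting time in an M/G/1 queue. It is not the numerical approximation method introduced in the thesis, and the broker parameters in the example are invented.

```python
# Illustrative sketch only: textbook Pollaczek-Khinchine formula for the mean
# waiting time E[W] in an M/G/1 queue, not the thesis's approximation method.
# E[W] = rho * E[S] * (1 + c_s^2) / (2 * (1 - rho)), with rho = lambda * E[S].
def mg1_mean_waiting_time(arrival_rate: float, mean_service: float, scv_service: float) -> float:
    """arrival_rate: lambda (msgs/s); mean_service: E[S] (s); scv_service: c_s^2."""
    rho = arrival_rate * mean_service
    if rho >= 1.0:
        raise ValueError("queue is unstable for rho >= 1")
    return rho * mean_service * (1.0 + scv_service) / (2.0 * (1.0 - rho))

if __name__ == "__main__":
    # hypothetical broker: 8000 msgs/s, 100 us mean filtering time (rho = 0.8), c_s^2 = 1
    print(f"E[W] = {mg1_mean_waiting_time(8000.0, 100e-6, 1.0) * 1e3:.2f} ms")
```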
T3 - Würzburger Beiträge zur Leistungsbewertung Verteilter Systeme - 04/10
KW - Middleware
KW - Publish-Subscribe-System
KW - Java Message Service
KW - Leistungsbewertung
KW - Middleware
KW - Publish-Subscribe-System
KW - Java Message Service
KW - Performance Evaluation
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-53388
ER -
TY - CHAP
ED - Kolla, Reiner
T1 - 9. Fachgespräch Sensornetze der GI/ITG Fachgruppe Kommunikation und Verteilte Systeme
N2 - Jährliches Fachgespräch zu Sensornetzen der GI/ITG Fachgruppe Kommunikation und Verteilte Systeme, 16. - 17. September 2010, Universität Würzburg
KW - Drahtloses Sensorsystem
KW - Fachgespräch
KW - Aufsatzsammlung
KW - sensor network
KW - wireless network
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-51106
ER -
TY - THES
A1 - Zeiger, Florian
T1 - Internet Protocol based networking of mobile robots
T1 - Internet Protokoll basierte Vernetzung von mobilen Robotern
N2 - This work is composed of three main parts: remote control of mobile systems via the Internet, ad-hoc networks of mobile robots, and remote control of mobile robots via 3G telecommunication technologies. The first part gives a detailed state of the art and a discussion of the problems to be solved in order to teleoperate mobile robots via the Internet. The focus of the application to be realized is set on a distributed tele-laboratory with remote experiments on mobile robots which can be accessed world-wide via the Internet. Therefore, analyses of the communication link are used in order to realize a robust system. The developed and implemented architecture of this distributed tele-laboratory allows for smooth access even with a variable or low link quality. The second part covers the application of ad-hoc networks for mobile robots. The networking of mobile robots via mobile ad-hoc networks is a very promising approach to realize integrated telematic systems without relying on preexisting communication infrastructure. Relevant civilian application scenarios are, for example, in the area of search and rescue operations where first responders are supported by multi-robot systems. Here, mobile robots, humans, and also existing stationary sensors can be connected very quickly and efficiently. Therefore, this work investigates and analyses the performance of different ad-hoc routing protocols for IEEE 802.11 based wireless networks in relevant scenarios. The analysis of the different protocols allows for an optimization of the parameter settings in order to use these ad-hoc routing protocols for mobile robot teleoperation. Guidelines for the realization of such telematic systems are given as well. In addition, traffic shaping mechanisms on the application layer are presented which allow for a more efficient use of the communication link. An additional application scenario, the integration of a small-size helicopter into an IP based ad-hoc network, is presented. The teleoperation of mobile robots via 3G telecommunication technologies is addressed in the third part of this work. The high availability, high mobility, and the high bandwidth provide a very interesting opportunity to realize scenarios for the teleoperation of mobile robots or industrial remote maintenance. This work analyses important parameters of the UMTS communication link and also investigates the characteristics of different data streams.
These analyses are used to give guidelines which are necessary for the realization of industrial remote maintenance or mobile robot teleoperation scenarios. All the results and guidelines for the design of telematic systems in this work were derived from analyses and experiments with real hardware.
N2 - Diese Arbeit gliedert sich in drei Hauptteile: Fernsteuerung mobiler Systeme über das Internet, ad-hoc Netzwerke mobiler Roboter und Fernsteuerung mobiler Roboter über Mobilfunktechnologien der 3. Generation. Im ersten Teil werden ein ausführlicher Stand der Technik und eine Diskussion der bei der Fernsteuerung mobiler Roboter über das Internet zu lösenden Probleme gegeben. Der Fokus der zu realisierenden Anwendung in diesem Teil der Arbeit liegt auf einem verteilten Tele-Labor mit Experimenten zu mobilen Robotern, welche über das Internet weltweit zugänglich sind. Hierzu werden Link-Analysen der zugrundeliegenden Kommunikationsinfrastruktur zu Hilfe genommen, um ein robustes System zu realisieren. Die entwickelte und implementierte Architektur des verteilten Tele-Labors erlaubt einen reibungslosen Zugang auch für Verbindungen mit variabler oder schlechter Linkqualität. Im zweiten Teil werden ad-hoc Netzwerke mobiler Roboter behandelt. Die Vernetzung mobiler Roboter über mobile ad-hoc Netzwerke ist eine vielversprechende Möglichkeit, um integrierte Telematiksysteme zu realisieren, ohne auf zuvor existierende Infrastruktur angewiesen zu sein. Relevante Einsatzszenarien im zivilen Bereich sind zum Beispiel Such- und Rettungsszenarien, in denen die Rettungskräfte vor Ort durch vernetzte Multi-Roboter Systeme unterstützt werden. Hier werden dann mobile Roboter, Menschen und gegebenenfalls auch vorhandene stationäre Sensoren schnell und effizient vernetzt. In dieser Arbeit werden dazu verschiedene ad-hoc Routing-Protokolle für IEEE 802.11 basierte Drahtlosnetzwerke in relevanten Szenarien untersucht und deren Leistungsfähigkeit verglichen. Die Analyse der verschiedenen Protokolle erlaubt eine Optimierung der Parametereinstellung, um diese ad-hoc Routing-Protokolle zur Fernsteuerung mobiler Roboter nutzbar zu machen. Weiterhin werden Richtlinien zur Realisierung solcher Telematiksysteme erarbeitet und Mechanismen zur Verkehrsformung auf Applikationsebene präsentiert, die eine effizientere Nutzung der vorhandenen Kommunikationskanäle erlauben. Als weiteres Anwendungsbeispiel ist die Integration eines ferngesteuerten Kleinhubschraubers in ein IP basiertes ad-hoc Netz beschrieben. Der dritte Teil der Arbeit beschäftigt sich mit der Fernsteuerung mobiler Roboter über Mobilfunktechnologien der 3. Generation. Die hohe Verfügbarkeit der UMTS-Technologie mit der verbundenen Mobilität und der gleichzeitigen hohen Bandbreite bietet hier eine interessante Möglichkeit, um die Fernsteuerung mobiler Roboter oder auch interaktive Fernwartungsszenarien zu realisieren. In der vorliegenden Arbeit werden wichtige Parameter der UMTS Verbindung analysiert und auch die Charakteristiken der Verbindung für verschiedene Verkehrsströme ermittelt. Diese dienen dann zur Erstellung von Richtlinien, die zur Umsetzung der interaktiven Fernwartungsszenarien oder auch der Fernsteuerung mobiler Roboter nötig sind. Die in dieser Arbeit erstellten Richtlinien zum Entwurf von Telematiksystemen wurden aus Analysen und Experimenten mit realer Hardware abgeleitet.
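The application-layer traffic shaping mentioned in this record can be pictured with a token-bucket sketch. This is not the mechanism implemented in the thesis; the rate, burst size, and frame size below are invented values, and the code only conveys the general idea of smoothing a video stream before it enters a weak wireless link.

```python
# Illustrative sketch only: application-layer shaping of a video stream with a
# token bucket. Frames are handed to the network only while enough byte budget
# (tokens) is available, which smooths bursts on a constrained wireless link.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def send_allowed(self, frame_size: int) -> bool:
        """Refill the bucket and check whether a frame of frame_size may be sent now."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_size <= self.tokens:
            self.tokens -= frame_size
            return True
        return False

if __name__ == "__main__":
    shaper = TokenBucket(rate_bytes_per_s=250_000, burst_bytes=60_000)  # roughly 2 Mbit/s
    for i in range(5):
        frame = 30_000  # hypothetical compressed video frame of 30 kB
        print(f"frame {i}: {'send' if shaper.send_allowed(frame) else 'delay or drop'}")
        time.sleep(0.05)
```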
T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 4
KW - Robotik
KW - Mobiler Roboter
KW - Fernsteuerung
KW - vernetzte Roboter
KW - Telematik
KW - Fernsteuerung
KW - Robotik
KW - Internet Protokoll
KW - networked robotics
KW - telematics
KW - remote control
KW - robotics
KW - internet protocol
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-54776
SN - 978-3-923959-59-4
ER -
TY - THES
A1 - Sauer, Markus
T1 - Mixed-Reality for Enhanced Robot Teleoperation
T1 - Mixed-Reality zur verbesserten Fernbedienung von Robotern
N2 - In den letzten Jahren ist die Forschung in der Robotik soweit fortgeschritten, dass die Mensch-Maschine Schnittstelle zunehmend die kritischste Komponente für eine hohe Gesamtperformanz von Systemen zur Navigation und Koordination von Robotern wird. In dieser Dissertation wird untersucht, wie Mixed-Reality Technologien für Nutzerschnittstellen genutzt werden können, um diese Gesamtperformanz zu erhöhen. Hierzu werden Konzepte und Technologien entwickelt, die durch Evaluierung mit Nutzertests ein optimiertes und anwenderbezogenes Design von Mixed-Reality Nutzerschnittstellen ermöglichen. Es werden somit sowohl die technischen Anforderungen als auch die menschlichen Faktoren für ein konsistentes Systemdesign berücksichtigt. Nach einer detaillierten Problemanalyse und der Erstellung eines Systemmodells, das den Menschen als Schlüsselkomponente mit einbezieht, wird zunächst die Anwendung der neuartigen 3D-Time-of-Flight Kamera zur Navigation von Robotern, aber auch für den Einsatz in Mixed-Reality Schnittstellen analysiert und optimiert. Weiterhin wird gezeigt, wie sich der Netzwerkverkehr des Videostroms als wichtigstes Informationselement der meisten Nutzerschnittstellen für die Navigationsaufgabe auf der Netzwerk-Applikationsebene in typischen Multi-Roboter Netzwerken mit dynamischen Topologien und Lastsituationen optimieren lässt. Hierdurch ist es möglich, in sonst typischen Ausfallszenarien den Videostrom zu erhalten und die Bildrate zu stabilisieren. Diese fortgeschrittenen Technologien werden dann auch in dem entwickelten Konzept der generischen 3D Mixed Reality Schnittstelle eingesetzt. Dieses Konzept ermöglicht eine integrierte 3D Darstellung der verfügbaren Information, so dass räumliche Beziehungen von Informationen aufrechterhalten werden und somit die Anzahl der mentalen Transformationen beim menschlichen Bediener reduziert wird. Gleichzeitig werden durch diesen Ansatz auch immersive Stereo Anzeigetechnologien unterstützt, welche zusätzlich das räumliche Verständnis der entfernten Situation fördern. Die in der Dissertation vorgestellten und evaluierten Ansätze nutzen auch die Tatsache, dass sich eine lokale Autonomie von Robotern heute sehr robust realisieren lässt. Dies wird zum Beispiel zur Realisierung eines Assistenzsystems mit variabler Autonomie eingesetzt. Hierbei erhält der Fernbediener über eine Kraftrückkopplung, kombiniert mit einer integrierten Augmented Reality Schnittstelle, einen Eindruck über die Situation am entfernten Arbeitsbereich, aber auch über die aktuelle Navigationsintention des Roboters. Die durchgeführten Nutzertests belegen die signifikante Steigerung der Navigationsperformanz durch den entwickelten Ansatz. Die robuste lokale Autonomie ermöglicht auch den in der Dissertation eingeführten Ansatz der prädiktiven Mixed-Reality Schnittstelle.
Die durch diesen Ansatz entkoppelte Regelschleife über den Menschen ermöglicht es, die Sichtbarkeit von unvermeidbaren Systemverzögerungen signifikant zu reduzieren. Zusätzlich können durch diesen Ansatz beide für die Navigation hilfreichen Blickwinkel in einer 3D-Nutzerschnittstelle kombiniert werden – der exozentrische Blickwinkel und der egozentrische Blickwinkel als Augmented Reality Sicht.
N2 - With the progress in robotics research, the human-machine interface is more and more becoming the major limiting factor for the overall system performance of a system for remote navigation and coordination of robots. In this monograph it is elaborated how mixed reality technologies can be applied to the user interfaces in order to increase the overall system performance. Concepts, technologies, and frameworks are developed and evaluated in user studies which enable novel user-centered approaches to the design of mixed-reality user interfaces for remote robot operation. Both the technological requirements and the human factors are considered to achieve a consistent system design. Novel technologies like 3D time-of-flight cameras are investigated for application in navigation tasks and in the developed concept of a generic mixed reality user interface. In addition, it is shown how the network traffic of a video stream can be shaped on the application layer in order to reach a stable frame rate in dynamic networks. The elaborated generic mixed reality framework enables an integrated 3D graphical user interface. The realized spatial integration and visualization of available information reduces the demand for mental transformations for the human operator and supports the use of immersive stereo devices. The developed concepts also make use of the fact that robust local autonomy components can be realized and thus can be incorporated as assistance systems for the human operators. A sliding autonomy concept is introduced which combines force feedback and visual augmented reality feedback. The force feedback component allows rendering the robot's current navigation intention to the human operator, such that a real sliding autonomy with seamless transitions is achieved. The user studies prove the significant increase in navigation performance by application of this concept. The generic mixed reality user interface together with robust local autonomy enables a further extension of the teleoperation system to a short-term predictive mixed reality user interface. With the presented concept of operation, it is possible to significantly reduce the visibility of system delays for the human operator. In addition, both advantageous characteristics of a 3D graphical user interface for robot teleoperation – an exocentric view and an augmented reality view – can be combined.
T3 - Forschungsberichte in der Robotik = Research Notes in Robotics - 5
KW - Mobiler Roboter
KW - Autonomer Roboter
KW - Mensch-Maschine-Schnittstelle
KW - Mixed Reality
KW - Mensch-Roboter-Interaktion
KW - Teleoperation
KW - Benutzerschnittstelle
KW - Robotik
KW - Mensch-Maschine-System
KW - Human-Robot-Interaction
KW - Teleoperation
KW - User Interface
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55083
SN - 978-3-923959-67-9
ER -
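The short-term predictive user interface described in this last record can be illustrated with a small pose-extrapolation sketch. This is not the predictive mixed-reality interface developed in the thesis; it only shows the underlying idea of decoupling the operator from transmission delay by extrapolating the displayed robot pose from the last reported state and the commanded velocities (all names, the motion model, and the parameter values are assumptions for illustration).

```python
# Illustrative sketch only: dead-reckon the robot pose that a delayed feedback
# channel has not yet reported, so the operator's display can stay ahead of the
# transmission delay. Simple unicycle motion model with midpoint heading.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # m
    y: float      # m
    theta: float  # rad

def predict_pose(last: Pose, v: float, w: float, delay_s: float) -> Pose:
    """v: commanded linear velocity (m/s), w: angular velocity (rad/s), delay_s: estimated latency (s)."""
    theta = last.theta + w * delay_s
    x = last.x + v * delay_s * math.cos(last.theta + 0.5 * w * delay_s)
    y = last.y + v * delay_s * math.sin(last.theta + 0.5 * w * delay_s)
    return Pose(x, y, theta)

if __name__ == "__main__":
    reported = Pose(1.0, 2.0, math.radians(90))
    shown = predict_pose(reported, v=0.4, w=0.2, delay_s=0.3)
    print(f"displayed pose: x={shown.x:.3f} m, y={shown.y:.3f} m, theta={math.degrees(shown.theta):.1f} deg")
```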