004 Data processing; Computer science
Global Self-Localization of Autonomous Mobile Robots - A Key Problem of Service Robotics
(2003)
This dissertation addresses the problem of global self-localization of autonomous mobile robots, which can be described as follows: a mobile robot operating inside a building may lose the knowledge of its own position. It is assumed that a map of the building is available to the robot as a model. Using a laser range finder, the mobile device can acquire new information and, given a correct assignment to the model map, derive suitable hypothetical positions. In general, however, these positions will be ambiguous. By moving intelligently through its environment, the robot can verify the original sensor data and, in the best case, determine its true position. For this problem, a new solution approach is presented in theory and practice, which maps the current local map, and thereby all sensor data, onto the model of the environment using feature-based matching methods. An exploration algorithm autonomously drives the robot to sensor points that provide new information. During this movement phase, the existing hypothetical positions are strengthened or weakened, so that after a short time a single dominant position, the actual robot position, remains.
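A minimal sketch of the strengthening/weakening bookkeeping described above, under a deliberately simplified sensor model; the helper predict_range and the scoring rule are hypothetical stand-ins for the thesis's feature-based matching, not its actual method:

```python
# Illustrative sketch only: keep a weight per hypothetical pose and
# update it after each new range measurement.

def update_hypotheses(hypotheses, measured_range, predict_range, tolerance=0.1):
    """hypotheses: dict mapping a candidate pose to its current weight.
    predict_range: hypothetical helper mapping a pose to the range the
    building map predicts at that pose."""
    for pose in hypotheses:
        error = abs(predict_range(pose) - measured_range)
        # Strengthen poses consistent with the measurement, weaken the rest.
        hypotheses[pose] *= 1.0 if error < tolerance else 0.5
    total = sum(hypotheses.values())
    for pose in hypotheses:  # renormalize so weights remain comparable
        hypotheses[pose] /= total
    return max(hypotheses, key=hypotheses.get)  # currently dominant pose
```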
In the last 40 years, complexity theory has grown into a rich and powerful field of theoretical computer science. The main task of complexity theory is the classification of problems with respect to their consumption of resources (e.g., running time or required memory). To study the computational complexity (i.e., the consumption of resources) of problems, similar problems are grouped into so-called complexity classes. During the systematic study of numerous problems of practical relevance, no efficient algorithm was found for a great number of the studied problems. Moreover, it was unclear whether such algorithms exist at all. A major breakthrough in this situation was the introduction of the complexity classes P and NP and the identification of the hardest problems in NP. These hardest problems of NP are nowadays known as NP-complete problems. One prominent example of an NP-complete problem is the satisfiability problem for propositional formulas (SAT). Here we are given a propositional formula as input, and it must be decided whether there exists an assignment to the propositional variables that satisfies the given formula. The intensive study of NP led to numerous related classes, e.g., the classes of the polynomial-time hierarchy PH, P, #P, PP, NL, L, and #L. During the study of these classes, problems related to propositional formulas were often identified as complete problems for them. Hence some questions arise: Why is SAT so hard to solve? Are there modifications of SAT which are complete for other well-known complexity classes? In the context of these questions, a result by E. Post is extremely useful. He identified and characterized all classes of Boolean functions that are closed under superposition. This result makes it possible to study problems connected to generalized propositional logic, which is done in this thesis. Many different problems connected to propositional logic are thus studied and classified with respect to their computational complexity, clarifying the borderline between easy and hard problems.
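To make the decision problem concrete, here is a small brute-force satisfiability check, exponential in the number of variables, which is precisely why SAT's apparent hardness matters; the clause encoding is an illustrative choice, not part of the thesis:

```python
from itertools import product

def satisfiable(cnf, variables):
    """cnf: list of clauses; a clause is a list of (variable, polarity)
    literals, e.g. ("x", False) stands for the negated variable x."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == pol for v, pol in clause) for clause in cnf):
            return True   # found a satisfying assignment
    return False          # all 2^n assignments fail

# (x or not y) and (not x or y) is satisfiable, e.g. by x = y = False.
print(satisfiable([[("x", True), ("y", False)],
                   [("x", False), ("y", True)]], ["x", "y"]))
```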
Star-free regular languages can be built up from alphabet letters using only Boolean operations and concatenation. The complexity of these languages can be measured with the so-called dot-depth. This measure leads to concatenation hierarchies like the dot-depth hierarchy (DDH) and the closely related Straubing-Thérien hierarchy (STH). The question whether the single levels of these hierarchies are decidable is still open and is known as the dot-depth problem. In this thesis we prove/reprove the decidability of some lower levels of both hierarchies. More precisely, we characterize these levels in terms of patterns in finite automata (subgraphs in the transition graph) that are not allowed; such characterizations are therefore called forbidden-pattern characterizations. The main results of the thesis are as follows:
- a forbidden-pattern characterization for level 3/2 of the DDH (which implies the decidability of this level)
- the decidability of the Boolean hierarchy over level 1/2 of the DDH
- the definition of decidable hierarchies having close relations to the DDH and STH
Moreover, we prove/reprove the decidability of the levels 1/2 and 3/2 of both hierarchies in terms of forbidden-pattern characterizations, and we show the decidability of the Boolean hierarchies over level 1/2 of the DDH and over level 1/2 of the STH. A technique which uses word extensions plays the central role in the proofs of these results. With this technique it is possible to treat the levels 1/2 and 3/2 of both hierarchies in a uniform way; furthermore, it can be used to prove the decidability of the mentioned Boolean hierarchies. Among other things, we provide a combinatorial tool that allows one to partition words of arbitrary length into factors of bounded length such that every second factor u leads to a loop with label u in a given finite automaton.
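The loop condition used by this combinatorial tool can be stated very directly; the sketch below, with an invented transition-table encoding, checks whether a factor u labels a loop at some state of a given deterministic finite automaton:

```python
def runs_to(delta, state, word):
    """Follow the transition table delta: (state, letter) -> state."""
    for letter in word:
        state = delta[(state, letter)]
    return state

def labels_loop(delta, states, u):
    """True if some state q satisfies delta*(q, u) == q,
    i.e. the factor u labels a loop at q."""
    return any(runs_to(delta, q, u) == q for q in states)

# Two-state automaton over {a, b}: 'a' swaps the states, 'b' keeps them.
delta = {(0, "a"): 1, (1, "a"): 0, (0, "b"): 0, (1, "b"): 1}
print(labels_loop(delta, [0, 1], "aa"))  # True: aa returns to its start state
print(labels_loop(delta, [0, 1], "a"))   # False: a always switches states
```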
Historical maps are fascinating documents and a valuable source of information for scientists of various disciplines. Many of these maps are available as scanned bitmap images, but in order to make them searchable in useful ways, a structured representation of the contained information is desirable.
This book deals with the extraction of spatial information from historical maps. This task cannot be expected to be solved fully automatically (since it involves difficult semantics), yet it is too tedious to be done manually at scale.
The methodology used in this book combines the strengths of both computers and humans: it describes efficient algorithms to largely automate information extraction tasks and pairs these algorithms with smart user interactions to handle what is not understood by the algorithm. The effectiveness of this approach is shown for various kinds of spatial documents from the 16th to the early 20th century.
Making machines understand natural language is a long-standing dream of mankind. Early attempts at programming machines to converse with humans in a supposedly intelligent way relied on phrase lists and simple keyword matching. However, such approaches cannot provide semantically adequate
answers, as they do not consider the specific meaning of the conversation. Thus, if we
want to enable machines to actually understand language, we need to be able to access
semantically relevant background knowledge. For this, it is possible to query so-called
ontologies, which are large networks containing knowledge about real-world entities
and their semantic relations. However, creating such ontologies is a tedious task, as often
extensive expert knowledge is required. Thus, we need to find ways to automatically
construct and update ontologies that fit human intuition of semantics and semantic
relations. More specifically, we need to determine semantic entities and find relations
between them. While this is usually done on large corpora of unstructured text, previous
work has shown that we can at least facilitate the first issue of extracting entities by
considering special data such as tagging data or human navigational paths. Here, we do
not need to detect the actual semantic entities, as they are already provided because of
the way those data are collected. Thus, we can mainly focus on the problem of assessing
the degree of semantic relatedness between tags or web pages. However, there exist
several issues which need to be overcome, if we want to approximate human intuition of
semantic relatedness. For this, it is necessary to represent words and concepts in a way
that allows easy and highly precise semantic characterization. This also largely depends
on the quality of data from which these representations are constructed.
In this thesis, we extract semantic information from both tagging data created by users
of social tagging systems and human navigation data in different semantic-driven social
web systems. Our main goal is to construct high-quality, robust vector representations of words, which can then be used to measure the relatedness of semantic concepts.
First, we show that navigation in the social media systems Wikipedia and BibSonomy is
driven by a semantic component. After this, we discuss and extend methods to model
the semantic information in tagging data as low-dimensional vectors. Furthermore, we
show that tagging pragmatics influences different facets of tagging semantics. We then
investigate the usefulness of human navigational paths in several different settings on
Wikipedia and BibSonomy for measuring semantic relatedness. Finally, we propose a metric-learning-based algorithm to adapt pre-trained word embeddings to datasets containing human judgments of semantic relatedness.
This work contributes to the field of studying semantic relatedness between words by proposing methods to extract semantic relatedness from web navigation, to learn high-quality, low-dimensional word representations from tagging data, and to learn semantic relatedness from any kind of vector representation by exploiting human feedback. Applications lie first and foremost in ontology learning for the Semantic Web, but also in semantic search and query expansion.
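As a concrete illustration, measuring relatedness from vector representations typically reduces to a similarity in the embedding space; the following sketch uses cosine similarity on hypothetical low-dimensional vectors (in the thesis, such vectors would come from tagging data or navigation paths):

```python
import math

def cosine_similarity(u, v):
    """Relatedness proxy: cosine of the angle between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 4-dimensional embeddings for two tags.
print(cosine_similarity([0.2, 0.8, 0.1, 0.0], [0.25, 0.7, 0.05, 0.1]))
```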
This thesis presents a first cross-architecture study of field-programmable logic devices for the implementation of synchronous circuits. First, a model for general field-programmable architectures based on periodic graphs is defined. Then, evaluation measures for architectures and circuit layouts are given to characterize structural properties with respect to chip-area consumption and signal delay. Furthermore, a generic layout tool is developed that can compute and evaluate implementations for arbitrary architectures and circuits. Finally, nine resource-minimalistic architectures with mesh and island structure are compared with one another.
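As an illustration of the underlying model, a periodic graph can be sketched as a finite tile replicated over a grid; the encoding below and the island-style example tile are hypothetical simplifications, not the thesis's actual model:

```python
from itertools import product

def expand_periodic_graph(tile_nodes, tile_edges, rows, cols):
    """Replicate a tile over a rows x cols grid.
    tile_edges: (u, v, (di, dj)) means node u in cell (i, j)
    connects to node v in cell (i + di, j + dj)."""
    nodes = {(i, j, n) for i, j in product(range(rows), range(cols))
             for n in tile_nodes}
    edges = set()
    for i, j in product(range(rows), range(cols)):
        for u, v, (di, dj) in tile_edges:
            ti, tj = i + di, j + dj
            if 0 <= ti < rows and 0 <= tj < cols:
                edges.add(((i, j, u), (ti, tj, v)))
    return nodes, edges

# Island-style fragment: a logic block L and a switch S per cell,
# with wires to the switches of the right and lower neighbors.
nodes, edges = expand_periodic_graph(
    ["L", "S"],
    [("L", "S", (0, 0)), ("S", "S", (0, 1)), ("S", "S", (1, 0))],
    rows=4, cols=4)
print(len(nodes), len(edges))
```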
This work is subdivided into two main areas: resilient admission control and resilient routing. The work gives an overview of the state of the art of quality-of-service mechanisms in communication networks and proposes a categorization of admission control (AC) methods. These approaches are investigated regarding performance, more precisely regarding the potential resource utilization, by dimensioning the capacity for a network with a given topology, traffic matrix, and required flow blocking probability. In case of a failure, the affected traffic is rerouted over backup paths, which increases the traffic rate on the respective links. To guarantee the effectiveness of admission control also in failure scenarios, this increased traffic rate must be taken into account for capacity dimensioning, leading to resilient AC. Capacity dimensioning is not feasible for existing networks with already given link capacities. For the application of resilient AC in this case, the size of distributed AC budgets must be adapted according to the traffic matrix in such a way that the maximum blocking probability over all flows is minimized and that no link capacity is exceeded by the admissible traffic rate in any failure scenario. Several algorithms for the solution of this problem are presented and compared regarding their efficiency and fairness. A prototype for resilient AC was implemented in the laboratories of Siemens AG in Munich within the scope of the project KING.
Resilience requires additional capacity on the backup paths for failure scenarios. The amount of this backup capacity depends on the routing and can be minimized by routing optimization. New protection switching mechanisms are presented that quickly divert the traffic around outage locations. They are simple and can be implemented, e.g., with MPLS technology. The Self-Protecting Multi-Path (SPM) is a multi-path consisting of disjoint partial paths. The traffic is distributed over all faultless partial paths according to an optimized load-balancing function, both in the working case and in failure scenarios. Performance studies show that the network topology and the traffic matrix significantly influence the amount of required backup capacity. The example of the COST-239 network illustrates that conventional shortest-path routing may need 50% more capacity than the optimized SPM if all single link and node failures are protected.
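The load-balancing idea of the SPM can be sketched as follows; note that simple renormalization of the optimized weights over the surviving paths is an assumed simplification of the per-scenario optimization described above:

```python
def spm_distribution(weights, failed):
    """weights: optimized load-balancing weights per disjoint partial path.
    failed: set of path indices affected by the current failure scenario.
    Returns the traffic fractions over the faultless paths (renormalized)."""
    alive = {i: w for i, w in enumerate(weights) if i not in failed}
    total = sum(alive.values())
    return {i: w / total for i, w in alive.items()}

# Three disjoint partial paths; path 1 fails and its share is redistributed.
print(spm_distribution([0.5, 0.3, 0.2], failed={1}))
# -> {0: 0.714..., 2: 0.285...}
```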
This thesis presents a new concept for driving-simulator databases. The user designs a database tailored to his research question with the help of a simple scripting language. The road network is represented at a topological level. In every simulation step, the geometric representation within the driver's range of visibility is computed from this topology. The parts of the road network invisible to the driver can be modified during the simulation. These modifications can depend on the driver's route or on the measurements collected during the simulation. In addition, the user can modify the road network interactively. The presented concept offers numerous possibilities for creating reproducible scenarios for experiments in driving simulators.
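A minimal sketch of the visibility-driven geometry generation, assuming each topological segment carries a representative point (a hypothetical simplification of the network representation):

```python
import math

def visible_segments(segments, driver_pos, visibility_range):
    """segments: dict mapping a segment id to a representative point (x, y).
    Only segments inside the visibility range would receive a geometric
    representation in the current simulation step."""
    visible = []
    for seg_id, (x, y) in segments.items():
        if math.hypot(x - driver_pos[0], y - driver_pos[1]) <= visibility_range:
            visible.append(seg_id)  # geometry generation would happen here
    return visible

segments = {"A": (10.0, 0.0), "B": (500.0, 40.0), "C": (90.0, 5.0)}
print(visible_segments(segments, driver_pos=(0.0, 0.0), visibility_range=120.0))
# -> ['A', 'C']; segment 'B' stays purely topological and may still be changed
```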
3D point clouds are a de facto standard for 3D documentation and modelling. Advances in laser scanning technology broaden the usability of and access to 3D measurement systems. 3D point clouds are used in many disciplines such as robotics, 3D modelling, archeology, and surveying. Scanners are able to acquire up to a million points per second, representing the captured environment as a dense point cloud with a very high degree of detail. The combination of laser scanning technology with photography adds color information to the point clouds, so the environment is represented even more realistically. Full 3D models of environments, without any occlusion, require multiple scans, and merging point clouds is a challenging process. This thesis presents methods for point cloud registration based on the panorama images generated from the scans. The image representation of point clouds makes 2D image processing methods applicable to 3D point clouds. Several projection methods for the generation of panorama maps of point clouds are presented in this thesis. Additionally, methods for point cloud reduction and compression based on the panorama maps are proposed. Given the large amounts of data generated by 3D measurement systems, these methods are necessary to improve point cloud processing, transmission, and archiving. This thesis further introduces point cloud processing methods as a novel framework for the digitisation of archeological excavations. The framework replaces the conventional documentation methods for excavation sites: it employs point clouds for the generation of the digital documentation of an excavation with the help of an archeologist on-site. The 3D point cloud is used not only for data representation but also for analysis and knowledge generation. Finally, this thesis presents an autonomous indoor mobile mapping system, focusing on the sensor placement planning method. Capturing a complete environment requires several scans; the sensor placement planning method determines the minimum number of scans required to digitise large environments. Combining this method with a navigation system on a mobile robot platform enables it to acquire data fully autonomously. This thesis introduces a novel hole detection method for point clouds to detect obscured parts of a captured environment. The sensor placement planning method selects the next scan position with the most coverage of the obscured environment, which reduces the required number of scans. The navigation system on the robot platform consists of path planning, path following, and obstacle avoidance, guaranteeing the safe navigation of the mobile robot platform between the scan positions. The sensor placement planning method is designed as a stand-alone process that can be used with a mobile robot platform for autonomous mapping of an environment, or as an assistant tool for the surveyor on scanning projects.
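One common way to build such panorama maps is the equirectangular mapping from 3D points to spherical and then to pixel coordinates; the following sketch is illustrative and not necessarily one of the projection methods of the thesis:

```python
import math

def equirectangular_pixel(x, y, z, width, height):
    """Project a 3D point onto an equirectangular panorama image.
    Returns (column, row); range or reflectance would be stored per pixel."""
    theta = math.atan2(y, x)                  # azimuth in (-pi, pi]
    phi = math.atan2(z, math.hypot(x, y))     # elevation in (-pi/2, pi/2)
    col = int((theta + math.pi) / (2 * math.pi) * (width - 1))
    row = int((math.pi / 2 - phi) / math.pi * (height - 1))
    return col, row

print(equirectangular_pixel(1.0, 0.0, 0.0, width=3600, height=1800))
# -> (1799, 899): a point straight ahead lands at the image center
```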
Stochastic thinking, Bernoulli stochastics, and its information-technological implementation, called Stochastikon, form the basis for understanding and successfully using a stochastic science. This thesis clarifies the notion of stochastic thinking, gives an accessible account of the Bernoulli stochastics developed by Elart von Collani, and describes Stochastikon. Both the overall concept of Stochastikon and the goals, tasks, and realization of its two subsystems, called Mentor and Encyclopedia, are presented. Stochastic thinking permits a realistic view of things, i.e., a view that is consistent with human observation and experience and thus takes into account the uncertainty about future developments. The notion of uncertainty used in this context refers exclusively to future developments and manifests itself in variability. The sources of uncertainty are, on the one hand, human ignorance and, on the other hand, randomness. Ignorance here means man's lack of knowledge about the unknown but fixed facts that represent the initial conditions of future development. Bernoulli stochastics provides a set of rules and enables the development of a quantitative model for describing uncertainty that explicitly incorporates the two sources, ignorance and randomness. The model is called Bernoulli space and forms the basis for deriving quantitative procedures that yield reliable and precise statements both about the non-existent random future (prediction procedures) and about the unknown but fixed past (measurement procedures). The software system Stochastikon implements Bernoulli stochastics as a set of self-contained, intercommunicating subsystems. The goal of the subsystem Encyclopedia is the provision and evaluation of stochastic knowledge. The subsystem Mentor supports the user in finding solutions to problems by identifying a correct model, i.e., a correct Bernoulli space. The solution-finding process itself contains no uncertainty; all the uncertainty lies in the solution, i.e., in the Bernoulli space, which explicitly covers the existing ignorance and the prevailing randomness.