Mini Unmanned Aerial Vehicles (MUAVs) are becoming a popular research platform and have drawn considerable attention, particularly during the last decade, due to their affordability and multi-dimensional applications in almost every walk of life. MUAVs have obvious advantages over manned platforms, including much lower manufacturing and operational costs, risk avoidance for human pilots, safe low-and-slow flight, and the realization of operations that are beyond inherent human limitations. Advancements in Micro Electro-Mechanical System (MEMS) technology, avionics and the miniaturization of sensors have also played a significant role in the evolution of MUAVs. These vehicles range from simple toys found in electronics supermarkets for entertainment purposes to highly sophisticated commercial platforms performing novel assignments such as offshore wind power station inspection and 3D modelling of buildings. MUAVs are also more environmentally friendly, as they cause less air pollution and noise. Unmanned is therefore unmatched. Recent research focuses on the use of multiple inexpensive vehicles flying together, while maintaining required relative separations, to carry out tasks more efficiently than a single exorbitant vehicle. Redundancy also does away with the risk of losing a single vehicle on which the whole mission depends. Valuable applications in the domain of cooperative control include joint load transportation, search and rescue, mobile communication relays, pesticide spraying and weather monitoring. Though the realization of multi-UAV coupled flight is complex, the obvious advantages justify
the laborious work involved...
Corfu is a framework for satellite software, covering not only the onboard part but also the ground. Developing software with Corfu follows an iterative model-driven approach. The basis of the process is an engineering model: engineers formally describe the basic structure of the onboard software in configuration files, which constitute the engineering model. In a first step, Corfu verifies the model at different levels, not only syntactically and semantically but also at higher levels such as scheduling.
Based on the model, Corfu generates a software scaffold, which follows an application-centric approach. Onboard software images consist of a list of applications connected through communication channels called topics. Corfu’s generic and generated code covers this fundamental communication as well as telecommand and telemetry handling. All users have to do is inherit from a generated class and implement the behavior in overridden methods. For each application, the generator creates an abstract class with pure virtual methods. Those methods are callback functions, e.g., for handling telecommands or executing code in threads.
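The callback pattern described above can be illustrated with a minimal sketch. Corfu's generated code is C++ with pure virtual methods; this Python analogue uses the `abc` module instead, and the names (`HousekeepingAppBase`, `handle_telecommand`) are hypothetical stand-ins, not Corfu's actual generated API:

```python
from abc import ABC, abstractmethod

# Hypothetical stand-in for a generated application base class.
class HousekeepingAppBase(ABC):
    @abstractmethod
    def handle_telecommand(self, tc: bytes) -> None:
        """Callback invoked when a telecommand for this app arrives."""

    @abstractmethod
    def worker_thread(self) -> None:
        """Callback executed in the application's own thread."""

# The user only inherits and overrides the callbacks.
class HousekeepingApp(HousekeepingAppBase):
    def __init__(self):
        self.last_tc = None

    def handle_telecommand(self, tc: bytes) -> None:
        self.last_tc = tc  # real code would decode and act on the TC

    def worker_thread(self) -> None:
        pass  # periodic housekeeping work would go here

app = HousekeepingApp()
app.handle_telecommand(b"\x01\x02")
```

The framework, not the user, decides when the callbacks run; the user code stays free of communication plumbing.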
However, from the model alone, one cannot foresee the software implementation by users. Therefore, as an innovation compared to other frameworks, Corfu introduces feedback from the user code back to the model. In this way, we extend the engineering model with information about functions/methods, their invocations, their stack usage, and information about events and telemetry emission. It would be possible to add further information extraction for additional use cases. We extract the information in two ways: assembly analysis and source code analysis. The assembly analysis collects information about the stack usage of functions and methods.
On the one hand, Corfu uses the gathered information to accomplish additional verification steps, e.g., checking whether stack usage exceeds the stack sizes of threads. On the other hand, we use the gathered information to improve the performance of onboard software. In a use case, we show how the compiled binary size and the bandwidth towards the ground can be reduced by exploiting source code information at run-time.
Since the first CubeSat launch in 2003, the hardware and software complexity of nanosatellites has been continuously increasing.
To keep up with the continuously increasing mission complexity and to retain the primary advantages of a CubeSat mission, a new approach for the overall space and ground software architecture and protocol configuration is elaborated in this work.
The aim of this thesis is to propose a uniform software and protocol architecture as a basis for software development, test, simulation and operation of multiple pico-/nanosatellites based on ultra-low power components.
In contrast to single-CubeSat missions, current and upcoming nanosatellite formation missions require faster and more straightforward development, pre-flight testing and calibration procedures as well as simultaneous operation of multiple satellites.
A dynamic and decentralized Compass mission network, consisting of uniformly accessible nodes, was established in multiple active CubeSat missions.
Compass middleware was elaborated to unify the communication and functional interfaces between all involved mission-related software and hardware components.
All systems can access each other via dynamic routes to perform service-based M2M communication.
With the proposed model-based communication approach, all states, abilities and functionalities of a system are accessed in a uniform way.
The Tiny scripting language was designed to allow dynamic code execution on ultra-low power components as a basis for a constraint-based in-orbit scheduler and experiment execution.
The implemented Compass Operations front-end enables far-reaching monitoring and control capabilities of all ground and space systems.
Its integrated constraint-based operations task scheduler allows the recording of complex satellite operations, which are conducted automatically during the overpasses.
The outcome of this thesis became an enabling technology for the UWE-3, UWE-4 and NetSat CubeSat missions.
Miniaturized satellites on a nanosatellite scale below 10 kg of total mass account for most of the satellites launched into Low Earth Orbit today. This results from the potential to design, integrate and launch these space missions within months at very low costs. In the past decade, the reliability in the fields of system design, communication, and attitude control has matured to allow for competitive applications in Earth observation, communication services, and science missions. The capability of orbit control is an important next step in this development, enabling operators to adjust orbits according to current mission needs and enabling small satellite formation flight, which promotes new measurements in various fields of space science. Moreover, this ability allows missions with altitudes above the ISS to comply with planned regulations regarding collision avoidance maneuvering.
This dissertation presents the successful implementation of orbit control capabilities in the pico-satellite class for the first time. This pioneering achievement is demonstrated on the 1U CubeSat UWE–4. A focus is on the integration and operation of an electric propulsion system on miniaturized satellites. Besides the limitations in size, mass, and power of a pico-satellite, the choice of a suitable electric propulsion system was driven by electromagnetic cleanliness and its use as a combined attitude and orbit control system. Moreover, the integration of the propulsion system leaves the valuable space at the outer faces of the CubeSat structure unoccupied for future use by payloads. The NanoFEEP propulsion system used consists of four thruster heads, two neutralizers and two Power Processing Units (PPUs).
The thrusters can be used continuously for 50 minutes per orbit after the liquefaction of the propellant by dedicated heaters. The power consumption of a PPU with one activated thruster, its heater and a neutralizer at emitter current levels of 30-60μA or thrust levels of 2.6-5.5μN, respectively, is in the range of 430-1050mW. Two thruster heads were activated within the scope of in-orbit experiments. The thrust direction was determined using a novel algorithm within 15.7° and 13.2° of the mounting direction. Despite limited controllability of the remaining thrusters, thrust vector pointing was achieved using the magnetic actuators of the Attitude and Orbit Control System.
In mid 2020, several orbit control maneuvers changed the altitude of UWE–4, a first for pico-satellites. During the orbit lowering scenario with a duration of ten days, a single thruster head was activated in 78 orbits for 5:40 minutes per orbit. This resulted in a reduction of the orbit altitude by about 98.3m and applied a Delta v of 5.4cm/s to UWE–4. The same thruster was activated in another experiment during 44 orbits within five days for an average duration of 7:00 minutes per orbit. The altitude of UWE–4 was increased by about 81.2m and a Delta v of 4.4cm/s was applied. Additionally, a collision avoidance maneuver was executed in July 2020, which increased the distance of closest approach to the object by more than 5000m.
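The reported altitude changes can be cross-checked with the Gauss variational equation for a tangential burn on a near-circular orbit, Δa ≈ 2Δv/n, where n is the mean motion. A back-of-envelope sketch, assuming an orbit altitude of roughly 500 km for UWE–4 (an assumption, not a figure from the abstract):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_E = 6371e3          # mean Earth radius, m

a = R_E + 500e3       # assumed semi-major axis (altitude ~500 km)
n = math.sqrt(MU / a**3)   # mean motion of a circular orbit, rad/s

dv = 5.4e-2           # total tangential delta-v from the lowering scenario, m/s
da = 2 * dv / n       # Gauss variational equation, near-circular orbit

print(f"altitude change ~ {da:.1f} m")
```

The result is on the order of 97 m, consistent with the reported reduction of about 98.3 m for a Delta v of 5.4 cm/s.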
This research work describes all aspects of the development of a novel autonomous quadrocopter, called AQopterI8, for indoor exploration. Thanks to its unique modular composition of software and hardware, the AQopterI8 is able to operate autonomously even under adverse environmental conditions and to fulfill varying requirements. The work addresses both theoretical questions, with an emphasis on simple realizability, and aspects of practical implementation, thereby covering topics from the fields of signal processing, control engineering, electrical engineering, model construction, robotics and computer science. The core aspects of the work are solutions for autonomy, obstacle detection and collision avoidance.
The system uses IMUs (Inertial Measurement Units) for orientation determination and attitude control and can automatically detect different sensor models. Ultrasonic, infrared and barometric pressure sensors in combination with the IMU are used for altitude determination and altitude control. In addition, imaging sensors (video camera, PMD), a laser scanner, and ultrasonic and infrared sensors are used for obstacle detection and collision avoidance (distance control). With the help of optical sensors, the quadrocopter can detect objects and determine its position in space based on principles of image processing. Together, these subsystems allow the AQopterI8 to autonomously search for an object in an unknown room, i.e. entirely without any external aids, and to indicate its position on a map. The system can avoid collisions with walls and autonomously evade people. In doing so, the AQopterI8 uses hardware that is considerably cheaper and, thanks to redundancy, at the same time considerably more reliable than comparable mono-sensor systems (e.g. camera- or laser-scanner-based systems).
In addition to its purpose as a research work (dissertation), the present work also serves as documentation of the overall AQopterI8 project, whose goal is the research and development of novel autonomous quadrocopters for indoor exploration. Furthermore, the system is used for teaching and research at the University of Würzburg, the Brandenburg University of Applied Sciences and the University of Applied Sciences Würzburg-Schweinfurt. This includes laboratory exercises and 31 student bachelor's and master's theses supervised by the author of this work.
The project was awarded the Universitätsförderpreis der Mainfränkischen Wirtschaft by the Universitätsbund and the IHK Würzburg-Mainfranken, and is funded under the names „Lebensretter mit Propellern“ and „Rettungshelfer mit Propellern“ (“lifesavers with propellers” and “rescue helpers with propellers”). The work was also nominated for the Gips-Schüle-Preis. The intention of these projects is the development of a rescue drone. The AQopterI8 has already been covered several times in newspapers, on television and on the radio.
The evaluation shows that the system is able to fly fully autonomously indoors, to avoid collisions with objects (distance control), to perform a search, and to detect, localize and count objects. Since only few research works achieve this degree of autonomy, while at the same time no work fulfills the stated requirements comparably, this work extends the state of research.
Within this thesis a new philosophy in monitoring spacecraft is presented: the
unification of the various kinds of monitoring techniques used during the
different lifecycle phases of a spacecraft.
The challenging requirements being set for this monitoring framework are:
- "separation of concerns" as a design principle (dividing the steps of logging
from registered sources, sending to connected sinks and displaying of
information),
- usage during all mission phases,
- usage by all actors (EGSE engineers, groundstation operators, etc.),
- configurable at runtime, especially regarding the level of detail of logging
information, and
- very low resource consumption.
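The "separation of concerns" principle from the first requirement can be sketched as follows. This is a minimal Python illustration of decoupled sources, sinks and a runtime-configurable detail level; it is not code from the RODOS-based framework itself, and all class names are hypothetical:

```python
class LogSink:
    """Abstract sink: where log entries go (display, ground link, file)."""
    def emit(self, level: int, message: str) -> None:
        raise NotImplementedError

class MemorySink(LogSink):
    """A sink that stores entries; a flight system might downlink them."""
    def __init__(self):
        self.entries = []
    def emit(self, level, message):
        self.entries.append((level, message))

class Logger:
    """Source side: registered sinks receive entries at or above a
    runtime-configurable detail level, decoupling logging from
    transport and display."""
    def __init__(self, min_level: int = 0):
        self.min_level = min_level
        self.sinks = []
    def add_sink(self, sink: LogSink):
        self.sinks.append(sink)
    def log(self, level: int, message: str):
        if level >= self.min_level:      # filter configurable at runtime
            for sink in self.sinks:
                sink.emit(level, message)

log = Logger(min_level=1)
mem = MemorySink()
log.add_sink(mem)
log.log(0, "debug detail")   # below threshold, filtered out
log.log(2, "thruster on")    # delivered to all connected sinks
```

Because sources never know which sinks are attached, the same logging calls can serve EGSE engineers, groundstation operators and onboard FDIR alike.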
First, a prototype of the monitoring framework was developed as a support library
for the real-time operating system
RODOS. This prototype was tested on dedicated hardware platforms relevant for
space, and also on a satellite demonstrator used for educational purposes.
As a second step, the results and lessons learned from the development and usage
of this prototype were transferred to a real space mission: the first satellite
of the DLR compact satellite series, a space-based platform for DLR's own
research activities. Within this project, the software of the avionics subsystem
was supplemented by a powerful logging component, which enhances the traditional
housekeeping capabilities and offers extensive filtering and debugging
techniques for monitoring and FDIR needs. This logging component is the major
part of the flight version of the monitoring framework. It is completed by
counterparts running on the development computers as well as on the EGSE
hardware in the integration room, making it most valuable already in the
earliest stages of traditional spacecraft development.
Future plans to add support on the groundstation side as well will lead
to a seamless integration of the monitoring framework not only into the
spacecraft itself, but into the whole space system.
This thesis describes the functional principle of FARN, a novel flight controller for Unmanned Aerial Vehicles (UAVs) designed for mission scenarios that require highly accurate and reliable navigation. The required precision is achieved by combining low-cost inertial sensors and Ultra-Wide Band (UWB) radio ranging with raw and carrier phase observations from the Global Navigation Satellite System (GNSS). The flight controller is developed within the scope of this work regarding the mission requirements of two research projects, and successfully applied under real conditions.
FARN includes a GNSS compass that allows a precise heading estimation even in environments where the conventional heading estimation based on a magnetic compass is not reliable. The GNSS compass combines the raw observations of two GNSS receivers with FARN’s real-time capable attitude determination. Thus, especially the deployment of UAVs in Arctic environments within the project for ROBEX is possible despite the weak horizontal component of the Earth’s magnetic field.
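The final step of a dual-antenna GNSS compass is recovering heading from the baseline vector between the two receivers. A minimal sketch, assuming the baseline has already been resolved in local east/north coordinates (the hard part, carrier-phase ambiguity resolution, is omitted here):

```python
import math

def heading_from_baseline(east: float, north: float) -> float:
    """Heading in degrees, clockwise from true north, of the baseline
    vector between two GNSS antennas, given its east/north components."""
    return math.degrees(math.atan2(east, north)) % 360.0

# A baseline pointing due east yields a heading of 90 degrees;
# no magnetic field is involved, which is why this works in the Arctic.
h_east  = heading_from_baseline(1.0, 0.0)
h_north = heading_from_baseline(0.0, 1.0)
print(h_east, h_north)
```

Unlike a magnetic compass, the result degrades only with baseline accuracy, not with the local strength of the horizontal field component.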
Additionally, FARN allows centimeter-accurate relative positioning of multiple UAVs in real-time. This enables precise flight maneuvers within a swarm, but also the execution of cooperative tasks in which several UAVs have a common goal or are physically coupled. A drone defense system based on two cooperative drones that act in a coordinated manner and carry a commonly suspended net to capture a potentially dangerous drone in mid-air was developed in conjunction with the
project MIDRAS.
Within this thesis, both theoretical and practical aspects are covered regarding UAV development with an emphasis on the fields of signal processing, guidance and control, electrical engineering, robotics, computer science, and programming of embedded systems. Furthermore, this work aims to provide a condensed reference for further research in the field of UAVs.
The work describes and models the utilized UAV platform, the propulsion system, the electronic design, and the utilized sensors. After establishing mathematical conventions for attitude representation, the actual core of the flight controller, namely the embedded ego-motion estimation and the principle control architecture are outlined. Subsequently, based on basic GNSS navigation algorithms, advanced carrier phase-based methods and their coupling to the ego-motion estimation framework are derived. Additionally, various implementation details and optimization steps of the system are described. The system is successfully deployed and tested within the two projects. After a critical examination and evaluation of the developed system, existing limitations and possible improvements are outlined.
This thesis deals with the first part of a larger project that pursues the ultimate goal of implementing a software tool that creates a Mission Control Room in Virtual Reality. The software is to be used for the operation of spacecraft and is specially developed for the unique real-time requirements of unmanned satellite missions. From launch, throughout the whole mission, up to the recovery or disposal of the satellite, all systems need to be monitored and controlled in continuous intervals to ensure the mission’s success. Mission operation is an essential part of every space mission and has been undertaken for decades. Recent technological advancements in the realm of immersive technologies pave the way for innovative methods to operate spacecraft. Virtual Reality has the capability to resolve the physical constraints set by traditional Mission Control Rooms and thereby delivers novel opportunities. The paper highlights underlying theoretical aspects of Virtual Reality, Mission Control and IP Communication. However, the focus lies upon the practical part of this thesis, which revolves around the first steps of the implementation of the virtual Mission Control Room in the Unity game engine. Overall, this paper serves as a demonstration of Virtual Reality technology and shows its possibilities with respect to the operation of spacecraft.
Ongoing changes in spaceflight – continuing miniaturization, declining costs of rocket launches and satellite components, and improved satellite computing and control capabilities – are advancing Satellite Formation Flying (SFF) as a research and application area. SFF enables new applications that cannot be realized (or cannot be realized at a reasonable cost) with conventional single-satellite missions. In particular, distributed Earth observation applications such as photogrammetry and tomography or distributed space telescopes require precisely placed and controlled satellites in orbit.
Several enabling technologies are required for SFF, such as inter-satellite communication, precise attitude control, and in-orbit maneuverability. However, one of the most important requirements is a reliable distributed Guidance, Navigation and Control (GNC) strategy. This work addresses the issue of distributed GNC for SFF in 3D with a focus on Continuous Low-Thrust (CLT) propulsion satellites (e.g., with electric thrusters) and concentrates on circular low Earth orbits. However, the focus of this work is not only on control theory, but control is considered as part of the system engineering process of typical small satellite missions. Thus, common sensor and actuator systems are analyzed to derive their characteristics and their impacts on formation control. This serves as the basis for the design, implementation, and evaluation of the following control approaches: First, a Model Predictive Control (MPC) method with specific adaptations to SFF and its requirements and constraints; second, a distributed robust controller that combines consensus methods for distributed system control and $H_{\infty}$ robust control; and finally, a controller that uses plant inversion for control and combines it with a reference governor to steer the controller to the target on an optimal trajectory considering several constraints. The developed controllers are validated and compared based on extensive software simulations. Realistic 3D formation flight scenarios were taken from the Networked Pico-Satellite Distributed System Control (NetSat) CubeSat formation flight mission. The three compared methods show different advantages and disadvantages in the different application scenarios. The distributed robust consensus-based controller, for example, lacks the ability to limit the maximum thrust, so it is not suitable for satellites with CLT. But both the MPC-based approach and the plant inversion-based controller are suitable for CLT SFF applications, while again showing distinct advantages and disadvantages in different scenarios.
The scientific contribution of this work may be summarized as the creation of novel and specific control approaches for the class of CLT SFF applications, which still lacks methods that withstand application in real space missions, as well as the scientific evaluation and comparison of the developed methods.
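The consensus idea underlying the distributed robust controller can be illustrated in a few lines. This sketch uses single-integrator dynamics and a hypothetical four-satellite ring topology, far simpler than the orbital dynamics treated in the thesis; it only shows how the control law u = -kLx drives the relative states of all agents to agreement:

```python
import numpy as np

# Graph Laplacian of a ring of four satellites (hypothetical topology):
# each satellite exchanges state only with its two neighbors.
L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

x = np.array([0.0, 3.0, 1.0, -2.0])  # initial along-track offsets (m)
k, dt = 0.5, 0.1                     # control gain and time step (assumed)

for _ in range(1000):
    u = -k * L @ x   # consensus law: u_i = -k * sum over neighbors (x_i - x_j)
    x = x + dt * u   # single-integrator kinematics (simplified)

spread = np.ptp(x)   # peak-to-peak spread shrinks toward zero
print(spread, x.mean())
```

Because the Laplacian is symmetric, the mean state is preserved while the disagreement decays, which is the basic mechanism the thesis combines with $H_{\infty}$ robust control.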
An enduring engineering problem is the creation of unreliable software leading to unreliable systems. One reason for this is that source code is written in a complicated manner, making it too hard for humans to review and understand. Complicated code leads to other issues beyond dependability, such as expanded development efforts and ongoing difficulties with maintenance, ultimately costing developers and users more money.
There are many ideas regarding where the blame lies in the creation of buggy and unreliable systems. One prevalent idea is that the selected life cycle model is to blame. The oft-maligned “waterfall” life cycle model is a particularly popular recipient of blame. In response, many organizations changed their life cycle model in hopes of addressing these issues. Agile life cycle models have become very popular, and they promote communication between team members and end users. In theory, this communication leads to fewer misunderstandings and should lead to less complicated and more reliable code.
Changing the life cycle model can indeed address communication issues, which can resolve many problems with understanding requirements.
However, most life cycle models do not specifically address coding practices or software architecture. Since lifecycle models do not address the structure of the code, they are often ineffective at addressing problems related to code complicacy.
This dissertation answers several research questions concerning software complicacy, beginning with an investigation of traditional metrics and static analysis to evaluate their usefulness as measurement tools. This dissertation also establishes a new concept in applied linguistics by creating a measurement of software complicacy based on linguistic economy. Linguistic economy describes the efficiencies of speech, and this thesis shows the applicability of linguistic economy to software. Embedded in each topic is a discussion
of the ramifications of overly complicated software, including the relationship of complicacy to software faults. Image recognition using machine learning is also investigated as a potential method of identifying problematic source code.
The central part of the work focuses on analyzing the source code of hundreds of different projects from different areas. A static analysis was performed on the source code of each project, and traditional software metrics were calculated. Programs were also analyzed using techniques developed by linguists to measure expression and statement complicacy and identifier complicacy. Professional software engineers were also directly surveyed to understand mainstream perspectives.
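As an illustration of the kind of traditional metric analyzed here, a crude estimate of cyclomatic complexity can be obtained by counting decision points in the text of a function. This keyword-counting proxy is a simplification of parser-based tools and is not the exact method used in the dissertation:

```python
import re

# Decision-point keywords (a rough, language-agnostic selection).
DECISIONS = re.compile(r"\b(if|elif|for|while|case|catch|and|or)\b")

def cyclomatic_estimate(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of decision points.
    A textual proxy, not a control-flow-graph measurement."""
    return 1 + len(DECISIONS.findall(source))

simple  = "def f(x):\n    return x + 1\n"
branchy = "def g(x):\n    if x > 0 and x < 10:\n        return 1\n    return 0\n"

c_simple = cyclomatic_estimate(simple)
c_branchy = cyclomatic_estimate(branchy)
print(c_simple, c_branchy)
```

Even this crude proxy separates straight-line code from branch-heavy code, which is the property such metrics exploit as bugginess indicators.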
This work shows it is possible to use traditional metrics as indicators of potential project bugginess. This work also discovered it is possible to use image recognition to identify problematic pieces of source code. Finally, this work discovered it is possible to use linguistic methods to determine which statements and expressions are least desirable and more complicated for programmers.
This work’s principal conclusion is that there are multiple ways to discover traits indicating a project or a piece of source code has characteristics of being buggy. Traditional metrics and static analysis can be used to gain some understanding of software complicacy and bugginess potential. Linguistic economy demonstrates a new tool for measuring software complicacy, and machine learning can predict where bugs may lie in source code. The significant implication of this work is that developers can recognize when a project is becoming buggy and take practical steps to avoid creating buggy projects.
Time-triggered communication is widely used throughout several industry domains, primarily for reliable and real-time capable data transfers. However, existing time-triggered technologies are designed for terrestrial usage and are not directly applicable to space applications due to the harsh environment. Instead, specific hardware must be developed to deal with thermal, mechanical, and especially radiation effects.
SpaceWire, as an event-triggered communication technology, has been used for years in a large number of space missions. Its moderate complexity, heritage, and transmission rates of up to 400 MBit/s are among its main advantages, and it is often without alternative for on-board computing systems of spacecraft. At present, real-time data transfers are achieved either by prioritization inside SpaceWire routers or by applying a simplified time-triggered approach. These solutions imply problems if they are used inside distributed on-board computing systems or in networks where more than a single router is required.
This work provides a solution for the real-time problem by developing a novel clock synchronization approach. This approach focuses on being compatible with distributed system structures and allows time-triggered data transfers. A significant difference to existing technologies is the remote clock estimation by the use of pulses. They are transferred over the network and remove the need for latency accumulation, which allows the incorporation of standardized SpaceWire equipment. Additionally, local clocks are controlled in a decentralized manner and provide different correction capabilities in order to handle oscillator-induced uncertainties. All these functionalities are provided by a developed Network Controller (NC), which is able to isolate the attached network and to control accesses.
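The idea of estimating a remote clock from observed pulses, rather than accumulating per-hop latencies, can be illustrated with a least-squares fit over pulse timestamp pairs. The skew and offset values below are illustrative inputs, not measurements from this work:

```python
import numpy as np

# Simulated pulse arrival times: the remote clock runs with a small
# skew and a constant offset relative to the local clock.
t_local = np.arange(0.0, 10.0, 1.0)       # local pulse timestamps (s)
skew, offset = 1.0001, 0.25               # illustrative values
t_remote = skew * t_local + offset        # corresponding remote timestamps

# A linear least-squares fit over the pulse pairs recovers skew and
# offset directly, without per-hop latency accumulation.
A = np.vstack([t_local, np.ones_like(t_local)]).T
est_skew, est_offset = np.linalg.lstsq(A, t_remote, rcond=None)[0]

print(est_skew, est_offset)
```

With real pulses, measurement noise would spread around this line, and the local clock controller would then slew toward the estimated remote time.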
There is great interest in affordable, precise and reliable metrology underwater:
Archaeologists want to document artifacts in situ with high detail.
In marine research, biologists require the tools to monitor coral growth and geologists need recordings to model sediment transport.
Furthermore, for offshore construction projects, maintenance, and inspection, millimeter-accurate measurements of defects and offshore structures are essential.
While the process of digitizing individual objects and complete sites on land is well understood and standard methods, such as Structure from Motion or terrestrial laser scanning, are regularly applied, precise underwater surveying with high resolution is still a complex and difficult task.
Applying optical scanning techniques in water is challenging due to reduced visibility caused by turbidity and light absorption.
However, optical underwater scanners provide significant advantages in terms of achievable resolution and accuracy compared to acoustic systems.
This thesis proposes an underwater laser scanning system and the algorithms for creating dense and accurate 3D scans in water.
It is based on laser triangulation and the main optical components are an underwater camera and a cross-line laser projector.
The prototype is configured with a motorized yaw axis for capturing scans from a tripod.
Alternatively, it is mounted to a moving platform for mobile mapping.
The main focus lies on the refractive calibration of the underwater camera and laser projector, the image processing and 3D reconstruction.
For highest accuracy, the refraction at the individual media interfaces must be taken into account.
This is addressed by an optimization-based calibration framework using a physical-geometric camera model derived from an analytical formulation of a ray-tracing projection model.
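At its core, laser triangulation recovers a 3D point by intersecting the camera ray through a detected laser pixel with the calibrated laser plane. A minimal in-air sketch (the refractive corrections that this calibration framework addresses are deliberately omitted, and the plane and ray values are illustrative):

```python
import numpy as np

def triangulate(ray_dir: np.ndarray, plane_n: np.ndarray, plane_d: float) -> np.ndarray:
    """Intersect a camera ray (origin at the camera center) with the
    laser plane n.x + d = 0; returns the 3D point in camera coordinates."""
    t = -plane_d / (plane_n @ ray_dir)   # ray parameter at the plane
    return t * ray_dir

# Illustrative calibration: laser plane x = 0.1 m in camera coordinates,
# and the viewing ray through a detected laser pixel.
n, d = np.array([1.0, 0.0, 0.0]), -0.1
ray = np.array([0.2, 0.0, 1.0])

p = triangulate(ray, n, d)
print(p)
```

Underwater, both the ray and the plane must first be corrected for refraction at the housing interfaces, which is exactly what the physical-geometric camera model above provides.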
In addition to scanning underwater structures, this work presents the 3D acquisition of semi-submerged structures and the correction of refraction effects.
As in-situ calibration in water is complex and time-consuming, the challenge of transferring an in-air scanner calibration to water without re-calibration is investigated, as well as self-calibration techniques for structured light.
The system was successfully deployed in various configurations for both static scanning and mobile mapping.
An evaluation of the calibration and 3D reconstruction using reference objects and a comparison of free-form surfaces in clear water demonstrate the high accuracy potential in the range of one millimeter to less than one centimeter, depending on the measurement distance.
Mobile underwater mapping and motion compensation based on visual-inertial odometry is demonstrated using a new optical underwater scanner based on fringe projection.
Continuous registration of individual scans allows the acquisition of 3D models from an underwater vehicle.
RGB images captured in parallel are used to create 3D point clouds of underwater scenes in full color.
3D maps are useful to the operator during the remote control of underwater vehicles and provide the building blocks to enable offshore inspection and surveying tasks.
The advancing automation of the measurement technology will allow non-experts to use it, significantly reduce acquisition time and increase accuracy, making underwater metrology more cost-effective.
Almost once a week, broadcasts about earthquakes, hurricanes, tsunamis, or forest fires fill the news. While it is hard for oneself to watch such news, it is even harder for rescue troops to enter such areas. They need particular skills to get a quick overview of the devastated area and find victims. Time is ticking, since the chance of survival shrinks the longer it takes until help arrives. To coordinate the teams efficiently, all information needs to be collected at the command center. Therefore, teams investigate the destroyed houses and hollow spaces for victims. In doing so, they can never be sure that the building will not fully collapse while they are inside. Here, rescue robots are welcome helpers, as they are replaceable and make the work more secure. Unfortunately, rescue robots are not yet usable off-the-shelf.
There is no doubt that such a robot has to fulfil essential requirements to successfully accomplish a rescue mission. Apart from the mechanical requirements, it has to be able to build a 3D map of the environment. This is essential to navigate through rough terrain and to fulfil manipulation tasks (e.g. opening doors). To build a map and gather environmental information, robots are equipped with multiple sensors. Since laser scanners produce precise measurements and support a wide scanning range, they are common visual sensors utilized for mapping.
Unfortunately, they produce erroneous measurements when scanning transparent objects (e.g. glass, transparent plastic) or specular reflective objects (e.g. mirrors, shiny metal). It is understood that such objects can be everywhere, and a pre-manipulation to prevent their influence is impossible. Using additional sensors also bears risks.
The problem is that these objects are only occasionally visible, depending on the incident angle of the laser beam, the surface, and the type of object. For transparent objects, measurements might therefore result from the object surface or from objects behind it. For specular reflective objects, measurements might result from the object surface or from a mirrored object; such mirrored objects appear behind the surface, which is wrong. To obtain a precise map, the surfaces need to be recognised and mapped reliably; otherwise, the robot may navigate into them and crash. Furthermore, points behind the surface should be identified and treated according to the object type: points behind a transparent surface should remain, as they represent real objects, whereas points behind a specular reflective surface should be erased. To do so, the object type needs to be classified. Unfortunately, none of the current approaches is capable of fulfilling these requirements.
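The treatment rule above can be condensed into a few lines. The following is a minimal illustrative sketch, not the thesis's actual filter modules; the function name and the one-range-per-beam representation are assumptions for the example.

```python
from enum import Enum

class SurfaceType(Enum):
    TRANSPARENT = 1
    SPECULAR = 2

def filter_behind_surface(ranges, surface_range, surface_type):
    """Treat measurements lying behind a classified surface.

    ranges        -- measured distances along one laser beam
    surface_range -- distance of the detected glass/mirror surface
    surface_type  -- classified type of that surface

    Points in front of or on the surface are always kept. Points behind
    a transparent surface are real objects seen through the glass and
    remain; points behind a specular surface are mirrored artefacts
    and are erased.
    """
    if surface_type is SurfaceType.TRANSPARENT:
        return list(ranges)
    return [r for r in ranges if r <= surface_range]
```

The sketch presumes the hard part, classification of the surface type, has already been done; the thesis's contribution lies precisely in that classification step.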
This thesis therefore addresses the problem of detecting transparent and specular reflective objects and identifying their influences. To give the reader a starting point, the first chapters describe the theoretical background concerning the propagation of light, the sensor systems applied for range measurements, the mapping approaches used in this work, and the state of the art concerning detection and identification of transparent and specular reflective objects. Afterwards, the Reflection-Identification-Approach, which is the core of this thesis, is presented. It comprises a 2D and a 3D implementation to detect and classify such objects; both are available as ROS nodes. In the next chapter, various experiments demonstrate the applicability and reliability of these nodes and prove that transparent and specular reflective objects can be detected and classified. In 2D, this requires a Pre- and a Post-Filter module; in 3D, classification is possible with the Pre-Filter alone, owing to the higher number of measurements. An example shows that an updatable mapping module allows the robot navigation to rely on refined maps; otherwise, two individual maps are built which have to be fused afterwards. Finally, the last chapter summarizes the results and proposes suggestions for future work.
The operation of satellites will change fundamentally in the future. The conventional approach practised so far, in which the planning of the activities to be executed by the satellite, as well as the control over them, is performed exclusively from the ground, is reaching its limits with today's applications. In the worst case, this even prevents previously untapped opportunities from being exploited. The return of a satellite, whether in the form of scientific data or the marketing of satellite-based services, is therefore not fully realized.
The cause of this problem can essentially be traced back to one decisive fact: conventional satellites cannot adapt their behaviour, i.e. the sequence of their activities, autonomously. Instead, the ground personnel, above all the operators, use planning software to create fixed schedules, which are then uploaded from the ground stations to the respective satellites in the form of command sequences. On board, the commands are merely checked, interpreted, and strictly executed, in linear order. Situation-dependent changes, as are common in the execution of software programs through control constructs such as loops and branches, are typically not provided for. The operator is therefore the only instance that can influence the satellite's behaviour, by commanding via upload, and only when there is direct radio contact between the satellite and a ground station. The achievable reaction times are at best a few seconds while the satellite is within range of the ground station. Outside the contact window, the time bound, determined by the orbit and the current position of the satellite, can extend from a few minutes to several hours. The signal propagation times of the radio link add further seconds in near-Earth space; in interplanetary space, they extend to several minutes owing to the immense distances. As a result, the ground-based reaction time of satellites that is technologically possible today is at best in the range of a few seconds.
This limitation is a severe obstacle for novel satellite missions in which, in particular, non-deterministic and short-lived phenomena (e.g. lightning and meteor entries into the Earth's atmosphere) are the subject of observation. The long reaction times of conventional satellite operations prevent the realization of such missions, since the delayed reaction occurs only after the event to be observed has already ended.
This dissertation presents a way to solve the problem caused by the long reaction times. At the centre of the approach is autonomy. Essentially, the satellite is equipped with the ability to determine or change its behaviour, i.e. the sequence of its activities, on its own. This removes the satellite's direct dependence on the operator for reactions. In essence, the satellite is enabled to command itself.
The idea of autonomy was implemented in the course of the underlying research. The result is an autonomous planning system: a software system with which autonomous behaviour can be realized on board a satellite. It can be adapted to different satellite missions. Furthermore, it covers various aspects of autonomous satellite operations, from the general decision-making about activities, through temporal scheduling under constraints (e.g. resources), to the actual execution, i.e. commanding. The planning system is used as an application in ASAP, an autonomous sensor platform. ASAP is an optical system for detecting short-lived phenomena and events in the Earth's atmosphere.
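The chain of decision-making, scheduling under resource constraints, and execution can be illustrated with a toy greedy scheduler. This is not ASAP's actual planner; the task fields and the single power-like resource budget are assumptions made for the example.

```python
def schedule(tasks, capacity):
    """Greedy onboard scheduling sketch.

    Candidate activities are ordered by priority and each one is
    admitted only while the resource budget (e.g. available energy)
    still allows it -- a minimal stand-in for scheduling under
    resource constraints.
    """
    plan, used = [], 0.0
    for task in sorted(tasks, key=lambda t: -t["priority"]):
        if used + task["resource"] <= capacity:
            plan.append(task["name"])
            used += task["resource"]
    return plan
```

A real onboard planner must additionally handle temporal constraints, replanning on new events, and command generation, which is where most of the complexity lies.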
The research on the autonomous planning system, on ASAP, and on related systems was carried out at the Professorship for Space Technology of the Chair of Computer Science VIII at the Julius-Maximilians-Universität Würzburg.
A complete simulation system is proposed that can be used as an educational tool by physicians for training basic skills of Minimally Invasive Vascular Interventions. In the first part, a surface model is developed to assemble arteries from a planar segmentation. It is based on sweep surfaces and can be extended to T- and Y-shaped bifurcations. A continuous force vector field is described, representing the interaction between the catheter and the surface. The computation time of the force field is almost unaffected when the resolution of the artery is increased.
The mechanical properties of arteries play an essential role in the study of circulatory system dynamics, which has become increasingly important in the treatment of cardiovascular diseases. In Virtual Reality simulators, it is crucial to have a tissue model that responds in real time. In this work, the arteries are discretized by a two-dimensional mesh whose nodes are connected by three kinds of linear springs. Three tissue layers (Intima, Media, Adventitia) are considered and, starting from the stretch-energy density, some of the elasticity tensor components are calculated. The physical model linearizes and homogenizes the material response but still accounts for the geometric nonlinearity. In general, if the arterial stretch varies by 1% or less, the agreement between the linear and nonlinear models is trustworthy.
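How a linearized material law can still retain geometric nonlinearity is worth making concrete. The sketch below is a generic mass-spring force, not the thesis's specific three-spring formulation: the stiffness is linear (Hookean), but the force acts along the spring's current, deformed direction, which is the geometrically nonlinear part.

```python
import numpy as np

def spring_force(x_i, x_j, rest_len, k):
    """Force on node i from a linear spring connecting it to node j.

    The material response is linear in the elongation (stiffness k),
    but the force direction follows the current configuration of the
    spring, so large rotations are handled correctly.
    """
    d = x_j - x_i
    length = np.linalg.norm(d)
    return k * (length - rest_len) * d / length
```

In a simulator, forces like this would be accumulated over all three spring types at every node of the 2D mesh and integrated in time to obtain the real-time tissue response.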
In the last part, the physical model of the wire proposed by Konings is improved. As a result, a simpler and more stable method is obtained to calculate the equilibrium configuration of the wire. In addition, a geometrical method is developed to perform relaxations; it is particularly useful when the wire is hindered in the physical method by the boundary conditions. The physical and geometrical methods are merged, resulting in efficient relaxations. Tests show that the shape of the virtual wire agrees with experiment. The proposed algorithm allows real-time execution, and the hardware needed to assemble the simulator is low-cost.
A remote sensing time series is a collection of remote sensing data acquired at fixed, equally spaced time intervals over a particular area or for the whole world. Near-daily high spatial resolution data is in great demand for remote sensing applications such as agriculture monitoring, phenology change detection, and environmental monitoring. These applications can produce better and more accurate results if they are provided with dense and accurate time series of data. The current remote sensing satellite architecture is still not capable of providing near-daily or daily high spatial resolution images to fulfil these needs. Limitations in sensors, the high development and operational costs of satellites, and clouds blocking the area of observation are some of the reasons why near-daily or daily high spatial resolution optical remote sensing data is highly challenging to achieve. With developments in optical sensor systems and well-planned satellite constellations this situation can be improved, but at a cost; even then the issue will not be completely resolved, and the growing need for high temporal and high spatial resolution data cannot be fulfilled entirely. Because the data collection process relies on satellites, which are physical systems, they can fail unpredictably for various reasons and cause a complete loss of observation for a given period, leaving a gap in the time series. Moreover, to observe long-term trends in phenology change due to rapidly changing environmental conditions, present-day remote sensing data alone is not sufficient; data from the past is also important.
A better alternative is to generate remote sensing time series by fusing data from multiple remote sensing satellites with different spatial and temporal resolutions. In this method, a high spatial, low temporal resolution image from a satellite such as Sentinel-2 can be fused with a low spatial, high temporal resolution image from a satellite such as Sentinel-3 to generate synthetic high temporal, high spatial resolution data. Time series generation by data fusion can be applied to images captured currently as well as to images captured by satellites in the past, providing the much-needed high temporal and high spatial resolution images for remote sensing applications. This approach, by its simple nature, is cost-effective and gives researchers the means to generate the data needed for their applications on their own from the limited sources available to them. An efficient data fusion approach in combination with a well-planned satellite constellation can ensure a near-daily time series of remote sensing data without any gaps. The aim of this research work is to develop efficient data fusion approaches to achieve dense remote sensing time series.
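The core idea of spatiotemporal fusion can be sketched in its simplest form: replicate the coarse pixels onto the fine grid and add the coarse-scale temporal change to the fine image. This is a toy temporal-difference predictor in the spirit of STARFM-style methods, not the specific approach developed in the thesis; the function name and the nearest-neighbour upsampling are assumptions for the example.

```python
import numpy as np

def fuse_temporal_difference(fine_t1, coarse_t1, coarse_t2, scale):
    """Predict a fine-resolution image at time t2 from a fine image
    at t1 and coarse images at t1 and t2.

    Each coarse pixel is replicated into a scale x scale block on the
    fine grid, and the coarse-scale change between t1 and t2 is added
    to the fine image at t1.
    """
    up_t1 = np.kron(coarse_t1, np.ones((scale, scale)))
    up_t2 = np.kron(coarse_t2, np.ones((scale, scale)))
    return fine_t1 + (up_t2 - up_t1)
```

Practical fusion methods additionally weight neighbouring pixels by spectral and spatial similarity to avoid the blockiness this naive version produces at coarse-pixel boundaries.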
Wireless communication networks are already an integral part of both the private and industrial sectors and are successfully replacing existing wired networks. They enable the development of novel applications and offer greater flexibility and efficiency. Although some efforts are already underway in the aerospace sector to deploy wireless communication networks on board spacecraft, none of these projects have yet succeeded in replacing the hard-wired state-of-the-art architecture for intra-spacecraft communication. The advantages are evident: reducing the wiring harness saves time, mass, and costs, and makes the whole integration process more flexible. It also allows for easier scaling when interconnecting different systems.
This dissertation deals with the design and implementation of a wireless network architecture to enhance intra-spacecraft communications by breaking with the state-of-the-art standards that have existed in the space industry for decades. The potential and benefits of this novel wireless network architecture are evaluated, and an innovative design using ultra-wideband technology is presented. It is combined with a Medium Access Control (MAC) layer tailored for low-latency and deterministic networks, supporting even mission-critical applications. As demonstrated by the Wireless Compose experiment on the International Space Station (ISS), this technology is not limited to communications but also enables novel positioning applications.
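Why a MAC layer of this kind can be called deterministic is easy to illustrate with a fixed slot schedule. This is a toy TDMA-style sketch, not the MAC protocol developed in the dissertation; node names and slot lengths are invented for the example.

```python
def tdma_schedule(nodes, slot_ms):
    """Assign each node a fixed transmit slot in a repeating frame.

    With a frame of len(nodes) slots, the worst-case channel access
    delay for any node is bounded by len(nodes) * slot_ms, which is
    what makes slot-based MACs deterministic, in contrast to
    contention-based access.
    """
    return {node: i * slot_ms for i, node in enumerate(nodes)}
```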
To address the technological challenges, extensive studies have been carried out on electromagnetic compatibility, space radiation, and data robustness. The architecture was evaluated from various perspectives and successfully demonstrated in space.
Overall, this research highlights how a wireless network can improve and potentially replace existing state-of-the-art communication systems on board spacecraft in future missions, and it will help to adapt and ultimately accelerate the adoption of wireless networks in space systems.