There is great interest in affordable, precise and reliable metrology underwater:
Archaeologists want to document artifacts in situ with high detail.
In marine research, biologists require the tools to monitor coral growth and geologists need recordings to model sediment transport.
Furthermore, for offshore construction, maintenance, and inspection projects, millimeter-accurate measurements of defects and offshore structures are essential.
While the process of digitizing individual objects and complete sites on land is well understood, and standard methods such as Structure from Motion or terrestrial laser scanning are applied routinely, precise underwater surveying with high resolution remains a complex and difficult task.
Applying optical scanning techniques in water is challenging due to reduced visibility caused by turbidity and light absorption.
However, optical underwater scanners provide significant advantages in terms of achievable resolution and accuracy compared to acoustic systems.
This thesis proposes an underwater laser scanning system and the algorithms for creating dense and accurate 3D scans in water.
It is based on laser triangulation and the main optical components are an underwater camera and a cross-line laser projector.
The prototype is configured with a motorized yaw axis for capturing scans from a tripod.
Alternatively, it is mounted to a moving platform for mobile mapping.
The main focus lies on the refractive calibration of the underwater camera and laser projector, the image processing and 3D reconstruction.
For highest accuracy, the refraction at the individual media interfaces must be taken into account.
This is addressed by an optimization-based calibration framework using a physical-geometric camera model derived from an analytical formulation of a ray-tracing projection model.
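The refractive ray-tracing idea behind such a calibration can be illustrated with a minimal sketch of Snell's law in vector form; the flat-port geometry, interface normal, and refractive indices below are illustrative textbook values, not the thesis's calibrated parameters:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n,
    passing from refractive index n1 into n2 (vector form of Snell's law).
    Returns None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Trace a camera ray through a flat port: air -> glass -> water.
# Indices are typical textbook values, not calibrated ones.
n_air, n_glass, n_water = 1.0, 1.49, 1.33
normal = np.array([0.0, 0.0, -1.0])   # port normal, pointing toward the camera
ray = np.array([0.3, 0.0, 1.0])       # pixel ray in air (illustrative)

in_glass = refract(ray, normal, n_air, n_glass)
in_water = refract(in_glass, normal, n_glass, n_water)
```

For parallel flat interfaces the invariant n·sin θ is preserved across the glass, which is why neglecting refraction biases every measured point and must be compensated by the calibration.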
In addition to scanning underwater structures, this work presents the 3D acquisition of semi-submerged structures and the correction of refraction effects.
As in-situ calibration in water is complex and time-consuming, the challenge of transferring an in-air scanner calibration to water without re-calibration is investigated, as well as self-calibration techniques for structured light.
The system was successfully deployed in various configurations for both static scanning and mobile mapping.
An evaluation of the calibration and 3D reconstruction using reference objects and a comparison of free-form surfaces in clear water demonstrate the high accuracy potential in the range of one millimeter to less than one centimeter, depending on the measurement distance.
Mobile underwater mapping and motion compensation based on visual-inertial odometry are demonstrated using a new optical underwater scanner based on fringe projection.
Continuous registration of individual scans allows the acquisition of 3D models from an underwater vehicle.
RGB images captured in parallel are used to create 3D point clouds of underwater scenes in full color.
3D maps are useful to the operator during the remote control of underwater vehicles and provide the building blocks to enable offshore inspection and surveying tasks.
The advancing automation of the measurement technology will allow non-experts to use it, significantly reduce acquisition time and increase accuracy, making underwater metrology more cost-effective.
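The core laser-triangulation step, intersecting a camera ray with a calibrated laser plane, can be sketched as follows; the camera pose, plane parameters, and ray below are illustrative values, not the prototype's calibration:

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane_n, plane_d):
    """Laser triangulation sketch: a 3D point is the intersection of a
    camera ray with the calibrated laser plane n·X + d = 0.
    The geometry here is illustrative, not the thesis's calibration."""
    denom = np.dot(plane_n, direction)
    if abs(denom) < 1e-12:
        return None  # ray parallel to the laser plane
    t = -(plane_d + np.dot(plane_n, origin)) / denom
    if t <= 0:
        return None  # intersection behind the camera
    return origin + t * direction

# Laser plane x = 0.5 (n = (1,0,0), d = -0.5), camera at the origin.
p = intersect_ray_plane(np.zeros(3), np.array([0.25, 0.0, 1.0]),
                        np.array([1.0, 0.0, 0.0]), -0.5)
```

In water, the ray fed into this intersection would first be bent by the refractive model, which is what couples the calibration and the 3D reconstruction.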
Accurate crop monitoring in response to climate change at a regional or field scale plays a significant role in developing agricultural policies, improving food security, forecasting, and analysing global trade trends. Climate change is expected to significantly impact agriculture, with shifts in temperature, precipitation patterns, and extreme weather events negatively affecting crop yields, soil fertility, water availability, biodiversity, and crop growing conditions. Remote sensing (RS) data, combined with crop growth models (CGMs), can provide valuable information for yield assessment by monitoring crop development, detecting crop changes, and assessing the impact of climate change on crop yields. This dissertation investigates the potential of RS data for modelling long-term crop yields of winter wheat (WW) and oilseed rape (OSR) for the Free State of Bavaria (70,550 km²), Germany.

The first chapter describes why accurate crop yield predictions are important for achieving sustainability in agriculture. The second chapter assesses the accuracy of synthetic RS data obtained by fusing the NDVI of two high-spatial-resolution datasets (high pair: Landsat (30 m, 16 days; L) and Sentinel-2 (10 m, 5–6 days; S)) with four low-spatial-resolution datasets (low pair: MOD13Q1 (250 m, 16 days), MCD43A4 (500 m, one day), MOD09GQ (250 m, one day), and MOD09Q1 (250 m, 8 days)) using the spatial and temporal adaptive reflectance fusion model (STARFM), which fills cloud or shadow gaps without losing spatial information. The chapter finds that both L-MOD13Q1 (R² = 0.62, RMSE = 0.11) and S-MOD13Q1 (R² = 0.68, RMSE = 0.13) are more suitable for agricultural monitoring than the other fused products.

The third chapter explores the ability of the synthetic spatiotemporal datasets obtained in chapter 2 to accurately map and monitor crop yields of WW and OSR at a regional scale. It investigates the optimal spatial (10 m, 30 m, or 250 m) and temporal (8- or 16-day) resolutions and CGMs (World Food Studies (WOFOST) and the semi-empirical light use efficiency (LUE) approach) for accurate crop yield estimation of both crop types. The chapter observes that the high-temporal-resolution (8-day) products of both S-MOD13Q1 and L-MOD13Q1 play a significant role in accurately estimating the yield of WW and OSR, and finds that the simple LUE model (R² = 0.77, relative RMSE (RRMSE) = 8.17%), which requires fewer input parameters to simulate crop yield, is more accurate, reliable, and precise than the complex WOFOST model (R² = 0.66, RRMSE = 11.35%) with its larger number of input parameters.

The fourth chapter examines the relationship between spatiotemporal fusion modelling using STARFM and crop yield prediction for WW and OSR using the LUE model for Bavaria from 2001 to 2019. It reports high positive correlation coefficients (R = 0.81 for WW and R = 0.77 for OSR) between the yearly R² of synthetic accuracy and modelled yield accuracy from 2001 to 2019. The chapter also analyses the impact of climate variables on crop yield predictions, observing an increase in R² (0.79 (WW) / 0.86 (OSR)) and a decrease in RMSE (4.51 / 2.57 dt/ha) when the climate effect is included in the model.

The fifth chapter suggests that coupling the LUE model with a random forest (RF) model can further reduce the RRMSE by 8% (WW) and 1.6% (OSR) and increase the R² by 14.3% (for both WW and OSR) compared to results relying on LUE alone. The same chapter concludes that satellite-based crop biomass, solar radiation, and temperature are the most influential variables in the yield prediction of both crop types.

The sixth chapter discusses the pros and cons of RS technology while analysing the impact of land use diversity on the modelled biomass of WW and OSR. It finds that the modelled biomass of both crops is positively impacted by land use diversity up to a radius of 450 m (Shannon Diversity Index ~0.75) for WW and 1050 m (~0.75) for OSR. The chapter also discusses future implications, noting that including additional factors (such as management practices, soil health, pest management, and pollinators) could improve the relationship between RS-modelled crop yields and biodiversity.

Lastly, the seventh chapter discusses the scope of new sensors, such as unmanned aerial vehicles, hyperspectral sensors, or Sentinel-1 SAR, for achieving accurate crop yield predictions in precision farming, and highlights the significance of artificial intelligence (AI) and deep learning (DL) in obtaining higher crop yield accuracies.
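The Monteith-type light use efficiency approach central to this work can be sketched in a few lines; eps_max, the stress scalars, and the harvest index below are illustrative values, not the thesis's calibrated parameterisation:

```python
import numpy as np

def lue_yield(par, fapar, t_stress, w_stress, eps_max=3.0, harvest_index=0.45):
    """Monteith-style light use efficiency (LUE) sketch:
    daily biomass gain = eps_max * stress scalars * fAPAR * PAR,
    summed over the season and scaled by a harvest index.
    eps_max [g/MJ] and harvest_index are illustrative values."""
    par = np.asarray(par, dtype=float)      # daily PAR [MJ/m^2]
    fapar = np.asarray(fapar, dtype=float)  # e.g. derived from fused NDVI
    biomass = np.sum(eps_max * t_stress * w_stress * fapar * par)  # [g/m^2]
    return biomass * harvest_index          # grain yield [g/m^2]

# Toy season: 100 days of constant forcing (not real Bavarian data).
y = lue_yield(par=np.full(100, 8.0), fapar=np.full(100, 0.6),
              t_stress=0.9, w_stress=0.95)
```

The fAPAR term is where fused satellite NDVI enters the model, which is why the accuracy of the STARFM products propagates directly into yield accuracy.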
Autonomous mobile robots operating in unknown terrain have to guide their drive decisions through local perception. Local mapping and traversability analysis are essential for safe rover operation and low-level locomotion. This thesis deals with the challenge of building a local, robot-centric map from ultra-short-baseline stereo imagery for height and traversability estimation.

Several grid-based, incremental mapping algorithms are compared and evaluated in a multi-size, multi-resolution framework. A new covariance-based mapping update is introduced, which is capable of detecting sub-cell-size obstacles and abstracts the terrain of one cell as a first-order surface.

The presented mapping setup is capable of producing reliable terrain and traversability estimates under the conditions expected for the Cooperative Autonomous Distributed Robotic Exploration (CADRE) mission. The algorithmic and software architecture design targets high reliability and efficiency to meet the tight constraints imposed by CADRE's small on-board embedded CPU.

Extensive evaluations are conducted to find possible edge-case scenarios in the operating envelope of the map and to confirm performance parameters. The research in this thesis targets the CADRE mission, but is applicable to any form of mobile robotics that requires height and traversability mapping.
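A covariance-based cell update of this kind can be sketched as an incremental per-cell plane fit; this is an illustrative reconstruction of the general technique, not the thesis's implementation:

```python
import numpy as np

class CellStats:
    """Incremental per-cell statistics for a first-order (planar) surface fit.
    A sketch of a covariance-based update, not the thesis's implementation:
    we accumulate the mean and covariance of the points falling into one
    grid cell and recover a plane from the smallest-eigenvalue eigenvector."""
    def __init__(self):
        self.n = 0
        self.mean = np.zeros(3)
        self.cov_sum = np.zeros((3, 3))  # sum of outer products of residuals

    def add(self, p):
        # Welford-style incremental mean/covariance update.
        self.n += 1
        delta = p - self.mean
        self.mean += delta / self.n
        self.cov_sum += np.outer(delta, p - self.mean)

    def plane(self):
        """Return (normal, mean, roughness). The normal is the eigenvector
        of the covariance with the smallest eigenvalue; that eigenvalue
        measures out-of-plane spread, flagging sub-cell-size obstacles."""
        cov = self.cov_sum / max(self.n - 1, 1)
        w, v = np.linalg.eigh(cov)  # eigenvalues in ascending order
        return v[:, 0], self.mean, w[0]

cell = CellStats()
rng = np.random.default_rng(0)
for _ in range(200):
    x, y = rng.uniform(0.0, 0.25, size=2)  # points inside a 25 cm cell
    cell.add(np.array([x, y, 0.1 * x]))    # gentle noise-free slope
normal, mean, roughness = cell.plane()
```

The incremental form matters for an embedded CPU: each stereo point updates a cell in O(1) without storing the point itself, and a large residual eigenvalue marks the cell as potentially untraversable.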
Deep learning enables enormous progress in many computer-vision-related tasks. Artificial Intelligence (AI) steadily yields new state-of-the-art results in detection and classification, with AI performance equalling or exceeding human performance. These achievements have impacted many domains, including medical applications.
One particular field of medical application is gastroenterology, where machine learning algorithms are used to assist examiners during interventions. One of the most critical concerns for gastroenterologists is the development of Colorectal Cancer (CRC), one of the leading causes of cancer-related deaths worldwide. Detecting polyps in screening colonoscopies is the essential procedure to prevent CRC: during a colonoscopy, the gastroenterologist uses an endoscope to screen the whole colon for polyps. Polyps are mucosal growths that can vary in severity.
This thesis supports gastroenterologists in their examinations with automated detection and classification systems for polyps. The main contribution is a real-time polyp detection system that is ready to be installed in any gastroenterology practice worldwide using open-source software. The system achieves state-of-the-art detection results and is currently being evaluated in a clinical trial in four different centers in Germany.
The thesis presents two additional key contributions. One is a polyp detection system with extended vision, tested in an animal trial. Polyps often hide behind folds or in uninvestigated areas; therefore, the system uses an endoscope assisted by two additional cameras to see behind those folds. If a polyp is detected, the endoscopist receives a visual signal. While the detection system handles the two additional camera inputs, the endoscopist focuses on the main camera as usual.
The second is a pair of polyp classification models, one for classification based on shape (Paris classification) and the other based on surface and texture (NBI International Colorectal Endoscopic (NICE) classification). Both classifications help the endoscopist with the treatment of, and decisions about, the detected polyp.
The key algorithms of the thesis achieve state-of-the-art performance. Notably, the polyp detection system, tested on a highly demanding video data set, achieves an F1 score of 90.25 % while working in real time, exceeding all real-time systems in the literature. Furthermore, the first preliminary results of the clinical trial of the polyp detection system suggest a high Adenoma Detection Rate (ADR). In the preliminary study, all polyps were detected by the polyp detection system, and the system achieved a high usability score of 96.3 (max 100). The Paris classification model achieved a state-of-the-art F1 score of 89.35 %, and the NICE classification model an F1 score of 81.13 %.
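For reference, the F1 score reported for these detectors is the harmonic mean of precision and recall; the counts in this sketch are illustrative, not the thesis's actual evaluation data:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall.
    The tp/fp/fn counts used below are illustrative only."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 181 true positives, 21 false positives, 18 false negatives,
# chosen only to land near the ~90 % range reported in the thesis.
score = f1_score(tp=181, fp=21, fn=18)
```

Because F1 balances missed polyps (false negatives) against false alarms (false positives), it is a stricter summary than either precision or recall alone.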
Furthermore, a large data set for polyp detection and classification was created during this thesis. To this end, a fast and robust annotation system called Fast Colonoscopy Annotation Tool (FastCAT) was developed. The system simplifies the annotation process for gastroenterologists: they annotate only key parts of the endoscopic video, after which those parts are pre-labeled by a polyp detection AI to speed up the process. Once the AI has pre-labeled the frames, non-experts correct and finish the annotation. This annotation process is fast and ensures high quality. FastCAT reduces the overall workload of the gastroenterologist on average by a factor of 20 compared to an open-source state-of-the-art annotation tool.