ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-2-2020, 727–734, 2020
https://doi.org/10.5194/isprs-annals-V-2-2020-727-2020

  03 Aug 2020

HYBRID GEOREFERENCING, ENHANCEMENT AND CLASSIFICATION OF ULTRA-HIGH RESOLUTION UAV LIDAR AND IMAGE POINT CLOUDS FOR MONITORING APPLICATIONS

N. Haala1, M. Kölle1, M. Cramer1, D. Laupheimer1, G. Mandlburger2, and P. Glira3
  • 1Institute for Photogrammetry, University of Stuttgart, Germany
  • 2TU Wien, Department of Geodesy and Geoinformation, Wien, Austria
  • 3AIT Austrian Institute of Technology, Austria

Keywords: UAV-based LiDAR, Dense Image Matching, Hybrid Adjustment, Classification, Deformation Monitoring

Abstract. This paper presents a study on the potential of ultra-high-accuracy UAV-based 3D data capture combining imagery and LiDAR data. Our work is motivated by a project aiming at the monitoring of subsidence in an area of mixed use. It thus covers built-up regions in a village, with a ship lock as the main object of interest, as well as regions of agricultural use. In order to monitor potential subsidence on the order of 10 mm/year, we aim at sub-centimeter accuracies for the respective 3D point clouds. We show that hybrid georeferencing increases the accuracy of the adjusted LiDAR point cloud by integrating results from photogrammetric block adjustment to improve the time-dependent trajectory corrections. As our main contribution, we demonstrate that the joint orientation of laser scans and images in a hybrid adjustment framework significantly improves the relative and absolute height accuracies. By these means, accuracies corresponding to the GSD of the integrated imagery can be achieved. Image data can also help to enhance the LiDAR point clouds. As an example, integrating results from Multi-View Stereo potentially increases the point density beyond that of airborne LiDAR. Furthermore, image texture can support 3D point cloud classification. This semantic segmentation, discussed in the final part of the paper, is a prerequisite for further enhancement and analysis of the captured point cloud.