ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume I-3
https://doi.org/10.5194/isprsannals-I-3-81-2012
20 Jul 2012

AUTOMATIC FUSION OF PARTIAL RECONSTRUCTIONS

A. Wendel, C. Hoppe, H. Bischof, and F. Leberl

Keywords: Fusion, Reconstruction, Registration, Close Range, Aerial, Robotics, Vision

Abstract. Novel image acquisition tools such as micro aerial vehicles (MAVs) in the form of quad- or octo-rotor helicopters support the creation of 3D reconstructions with ground sampling distances below 1 cm. The limitation of aerial photogrammetry to nadir and oblique views at heights of several hundred meters is bypassed, allowing close-up photos of facades and ground features. However, the new acquisition modality also introduces challenges: First, flight space may be restricted in urban areas, which leads to missing views for accurate 3D reconstruction and causes fracturing of large models. Fracturing can also result from vegetation or simply a change of illumination during image acquisition. Second, accurate geo-referencing of reconstructions is difficult because GPS signals are shadowed in urban areas, so alignment based on GPS information is often not possible.

In this paper, we address the automatic fusion of such partial reconstructions. Our approach is largely based on the work of Wendel et al. (2011a), but does not require an overhead digital surface model for fusion. Instead, we exploit the fact that patch-based semi-dense reconstruction of the fractured model typically results in several point clouds covering overlapping areas, even if sparse feature correspondences cannot be established. We approximate orthographic depth maps for the individual parts and iteratively align them in a global coordinate system. As a result, we are able to generate point clouds which are visually more appealing and serve as an ideal basis for further processing. Mismatches between parts of the fused models depend only on the individual point density, which allows us to achieve a fusion accuracy in the range of ±1 cm on our evaluation dataset.
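
The following is a minimal sketch (in Python with NumPy, not the authors' implementation) of the two steps named in the abstract, assuming each partial reconstruction is available as an Nx3 point cloud in roughly gravity-aligned coordinates and that both depth maps are rasterised over a common bounding box: (1) approximate an orthographic, top-down depth map per part, and (2) find the horizontal offset between two parts that minimises the depth difference over their overlap. The function names, grid resolution, and the exhaustive shift search are illustrative choices, not details taken from the paper.

```python
import numpy as np

def orthographic_depth_map(points, cell_size=0.05, bounds=None):
    """Rasterise an Nx3 point cloud into a top-down depth (height) map.
    Each grid cell keeps the maximum z of the points falling into it;
    cells without points are NaN."""
    if bounds is None:
        bounds = (points[:, 0].min(), points[:, 1].min(),
                  points[:, 0].max(), points[:, 1].max())
    x0, y0, x1, y1 = bounds
    w = int(np.ceil((x1 - x0) / cell_size)) + 1
    h = int(np.ceil((y1 - y0) / cell_size)) + 1
    depth = np.full((h, w), np.nan)
    ix = ((points[:, 0] - x0) / cell_size).astype(int)
    iy = ((points[:, 1] - y0) / cell_size).astype(int)
    valid = (ix >= 0) & (ix < w) & (iy >= 0) & (iy < h)
    for x, y, z in zip(ix[valid], iy[valid], points[valid, 2]):
        if np.isnan(depth[y, x]) or z > depth[y, x]:
            depth[y, x] = z
    return depth

def align_offset(depth_ref, depth_mov, search=10, min_overlap=100):
    """Test integer (dx, dy) cell shifts of depth_mov against depth_ref
    (both rasterised over the same bounding box) and return the shift with
    the smallest mean absolute depth difference over overlapping cells."""
    best_dx, best_dy, best_err = 0, 0, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(depth_mov, dy, axis=0), dx, axis=1)
            # invalidate cells that wrapped around the map border
            if dy > 0:
                shifted[:dy, :] = np.nan
            elif dy < 0:
                shifted[dy:, :] = np.nan
            if dx > 0:
                shifted[:, :dx] = np.nan
            elif dx < 0:
                shifted[:, dx:] = np.nan
            diff = np.abs(depth_ref - shifted)
            overlap = ~np.isnan(diff)
            if overlap.sum() >= min_overlap:
                err = diff[overlap].mean()
                if err < best_err:
                    best_dx, best_dy, best_err = dx, dy, err
    return best_dx, best_dy, best_err
```

In practice a coarse-to-fine or iterative refinement would replace the exhaustive search; the sketch only illustrates how overlapping semi-dense point clouds can be registered via approximate orthographic depth maps when sparse feature correspondences cannot be established.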