ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume V-2-2020
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-2-2020, 933–940, 2020
https://doi.org/10.5194/isprs-annals-V-2-2020-933-2020

  03 Aug 2020
COMPLETION OF SPARSE AND PARTIAL POINT CLOUDS OF VEHICLES USING A NOVEL END-TO-END NETWORK

Y. Xia1, W. Liu2, Z. Luo2, Y. Xu1, and U. Stilla1
  • 1Photogrammetry and Remote Sensing, Technical University of Munich, 80333 Munich, Germany
  • 2Fujian Key Laboratory of Sensing and Computing, School of Informatics, Xiamen University, 361005 Xiamen, China

Keywords: Shape Completion, Uniform Point Cloud, Point Cloud Generation, 3D Reconstruction, Deep Learning

Abstract. Completing the 3D shape of vehicles from real scan data, i.e., estimating the complete geometry of vehicles from partial inputs, plays an important role in remote sensing and autonomous driving. With the recent popularity of deep learning, many data-driven methods have been proposed. However, most of them require additional prior knowledge about the input, such as semantic labels or symmetry assumptions. In this paper, we design a novel end-to-end network, termed S2U-Net, to complete the 3D shapes of vehicles from partial and sparse point clouds. Our network consists of two modules: an encoder and a generator. The encoder extracts a global feature from the incomplete, sparse point cloud, while the generator produces a fine-grained and dense completion. In particular, we adopt an upsampling strategy to output a more uniform point cloud. Experimental results on the KITTI dataset show that our method outperforms the state of the art in terms of distribution uniformity and completion quality. Specifically, when the completed results are evaluated with a point cloud registration task, we improve translation accuracy by 50.8% and rotation accuracy by 40.6%.
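The encoder–generator data flow described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: the layer sizes, the PointNet-style max-pooling encoder, and the single-layer generator are assumptions for illustration, not the authors' actual S2U-Net architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(points, feat_dim=256):
    """Map a sparse partial point cloud (N x 3) to one global feature vector.

    Hypothetical PointNet-style encoder: a shared per-point MLP followed by
    order-invariant max pooling over the point dimension.
    """
    w = rng.standard_normal((3, feat_dim)) * 0.1
    per_point = np.maximum(points @ w, 0.0)   # shared one-layer MLP with ReLU
    return per_point.max(axis=0)              # symmetric pooling -> (feat_dim,)

def generator(global_feat, num_out=2048):
    """Expand the global feature into a dense, complete point cloud (M x 3)."""
    w = rng.standard_normal((global_feat.size, num_out * 3)) * 0.1
    return (global_feat @ w).reshape(num_out, 3)

# A sparse, partial input scan is completed into a denser point set.
partial = rng.standard_normal((128, 3))
completed = generator(encoder(partial))
print(completed.shape)  # (2048, 3)
```

The key design point conveyed by the abstract is that the bottleneck is a single global feature, so the generator must synthesize all missing geometry from that compact code; the paper's upsampling strategy additionally pushes the output points toward a uniform distribution.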