ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-1/W1, 75-82, 2017
https://doi.org/10.5194/isprs-annals-IV-1-W1-75-2017
© Author(s) 2017. This work is distributed under
the Creative Commons Attribution 3.0 License.

30 May 2017

DISOCCLUSION OF 3D LIDAR POINT CLOUDS USING RANGE IMAGES

P. Biasutti1,2, J.-F. Aujol1, M. Brédif3, and A. Bugeau2
1 Université de Bordeaux, IMB, CNRS UMR 5251, INP, 33400 Talence, France
2 Université de Bordeaux, LaBRI, CNRS UMR 5800, 33400 Talence, France
3 Université Paris-Est, LASTIG MATIS, IGN, ENSG, F-94160 Saint-Mandé, France

Keywords: LiDAR, MMS, Range Image, Disocclusion, Inpainting, Variational, Segmentation, Point Cloud

Abstract. This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most existing lines of research tackle this problem directly in the 3D space. This work promotes an alternative approach based on a 2D range image representation of the 3D point cloud, taking advantage of the fact that the disocclusion problem has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor's topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed in order to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected back into a 3D point cloud. Experiments on real data demonstrate the effectiveness of this procedure both in terms of accuracy and speed.
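The first and last steps of the pipeline summarized above (projection to a range image via the sensor topology, and unprojection of the inpainted image back to 3D) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the sensor topology is available as per-point row/column indices (e.g. laser ring id and firing index), and all names are illustrative.

```python
import numpy as np

def points_to_range_image(points, rows, cols, n_rows, n_cols):
    """Project a 3D point cloud onto a 2D range image using sensor topology.

    points     : (N, 3) array of x, y, z coordinates in the sensor frame
    rows, cols : (N,) integer arrays giving each point's scanline and firing index
    (assumed to be provided by the acquisition; names are hypothetical)
    """
    ranges = np.linalg.norm(points, axis=1)   # per-point range (depth)
    image = np.zeros((n_rows, n_cols))        # 0 marks pixels with no return
    image[rows, cols] = ranges
    return image

def range_image_to_points(image, directions):
    """Unproject a (possibly inpainted) range image back to a 3D point cloud.

    directions : (n_rows, n_cols, 3) unit viewing directions of each pixel
    """
    mask = image > 0
    return directions[mask] * image[mask][:, None]   # scale unit rays by range
```

The segmentation and variational inpainting stages described in the abstract would operate on the 2D `image` array before it is passed to the unprojection step.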