ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., II-3, 73-78, 2014
© Author(s) 2014. This work is distributed
under the Creative Commons Attribution 3.0 License.
Published: 07 Aug 2014
Object-level Segmentation of RGBD Data
H. Huang1, H. Jiang2, C. Brenner3, and H. Mayer1
1Institute of Applied Computer Science, Bundeswehr University Munich, Neubiberg, Germany
2Computer Science Department, Boston College, Chestnut Hill, MA, USA
3Institute of Cartography and Geoinformatics, Leibniz University Hannover, Hannover, Germany
Keywords: Segmentation, Point cloud, Scene, Interpretation, Image, Understanding

Abstract. We propose a novel method to segment Microsoft™ Kinect data of indoor scenes with an emphasis on freeform objects. We use the full 3D information for scene parsing and the segmentation of potential objects instead of treating the depth values as an additional channel of the 2D image. The raw RGBD image is first converted to a 3D point cloud with color. We then group the points into patches, which are derived from a 2D superpixel segmentation. With the assumption that every patch in the point cloud represents (a part of) the surface of an underlying solid body, a hypothetical quasi-3D model – the "synthetic volume primitive" (SVP) – is constructed by extending the patch with a synthetic extrusion in 3D. The SVPs vote for a common object via intersection. By this means, a freeform object can be "assembled" from an unknown number of SVPs seen from arbitrary angles. Besides the intersection, two other criteria, i.e., coplanarity and color coherence, are integrated into the global optimization to improve the segmentation. Experiments demonstrate the potential of the proposed method.
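The abstract does not give implementation details, but its first step – converting the raw RGBD image to a colored 3D point cloud – follows the standard pinhole back-projection model. Below is a minimal sketch under that assumption; the function name `depth_to_point_cloud` and the Kinect-like intrinsics `fx`, `fy`, `cx`, `cy` are illustrative, not taken from the paper:

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to a colored 3D point cloud.

    depth: (H, W) array of depth values; 0 marks missing measurements.
    rgb:   (H, W, 3) color image aligned with the depth image.
    fx, fy, cx, cy: pinhole intrinsics (focal lengths, principal point).
    Returns (N, 3) points and (N, 3) colors for the valid pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grid
    z = depth
    x = (u - cx) * z / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    valid = z > 0  # drop pixels with no depth return
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid]
    return points, colors

# Toy example: a 2x2 depth image; one pixel has no depth and is dropped.
depth = np.array([[1.0, 2.0],
                  [0.0, 1.5]])
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
pts, cols = depth_to_point_cloud(depth, rgb, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
```

The resulting colored points would then be grouped into patches along the 2D superpixel boundaries, which is where the method described in the abstract takes over.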

Citation: Huang, H., Jiang, H., Brenner, C., and Mayer, H.: Object-level Segmentation of RGBD Data, ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., II-3, 73-78, doi:10.5194/isprsannals-II-3-73-2014, 2014.
