ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., II-3/W5, 475-482, 2015
https://doi.org/10.5194/isprsannals-II-3-W5-475-2015
© Author(s) 2015. This work is distributed under
the Creative Commons Attribution 3.0 License.

20 Aug 2015

GLOBAL AND LOCAL SPARSE SUBSPACE OPTIMIZATION FOR MOTION SEGMENTATION

M. Ying Yang1, S. Feng2, H. Ackermann2, and B. Rosenhahn2
  • 1Computer Vision Lab, TU Dresden, Dresden, Germany
  • 2Institute for Information Processing (TNT), Leibniz University Hannover, Hannover, Germany

Keywords: Motion segmentation, Affine subspace model, Sparse PCA, Subspace estimation, Optimization

Abstract. In this paper, we propose a new framework for segmenting feature-based moving objects under the affine subspace model. Since feature trajectories in practice are high-dimensional and noisy, we first apply sparse PCA to represent the original trajectories by a low-dimensional global subspace spanned by orthogonal sparse principal vectors. Local subspace separation is then achieved by automatically searching for a sparse representation of the nearest neighbors of each projected data point. To refine the local subspace estimation, we propose an error estimate that encourages projected data spanning the same local subspace to be clustered together. Finally, the segmentation of the different motions is obtained by spectral clustering on an affinity matrix constructed from both the error estimate and the sparse-neighbor optimization. We test our method extensively and compare it with state-of-the-art methods on the Hopkins 155 dataset. The results show that our method is comparable to other motion segmentation methods and in many cases exceeds them in terms of precision and computation time.
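To make the pipeline in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it uses scikit-learn's SparsePCA for the global subspace, orthogonal matching pursuit as a stand-in for the paper's sparse-neighbor optimization, a residual-based proxy for the error estimate, and spectral clustering on the resulting affinity matrix. Names such as `trajectories`, `n_motions`, and `n_neighbors`, and all parameter values, are assumptions for illustration only.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# Assumes NumPy and scikit-learn; parameter choices are illustrative.
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.linear_model import orthogonal_mp
from sklearn.cluster import SpectralClustering


def segment_motions(trajectories, n_motions, n_components=5, n_neighbors=8):
    """Cluster feature trajectories (one row per trajectory, columns are the
    stacked image coordinates over all frames) into `n_motions` groups."""
    # 1) Global subspace: project the high-dimensional, noisy trajectories
    #    onto a few orthogonal sparse principal vectors.
    projected = SparsePCA(n_components=n_components).fit_transform(trajectories)

    n = projected.shape[0]
    affinity = np.zeros((n, n))

    # 2) Local subspace separation: represent each projected point sparsely
    #    in terms of its nearest neighbors (orthogonal matching pursuit here
    #    stands in for the paper's sparse optimization).
    for i in range(n):
        dists = np.linalg.norm(projected - projected[i], axis=1)
        neighbors = np.argsort(dists)[1:n_neighbors + 1]   # skip the point itself
        dictionary = projected[neighbors].T                # shape (d, k)
        coeffs = orthogonal_mp(dictionary, projected[i], n_nonzero_coefs=3)

        # 3) Error-weighted affinity: down-weight neighbors whose local
        #    subspace fits the point poorly (a simple residual-based proxy
        #    for the paper's error estimation).
        residual = np.linalg.norm(projected[i] - dictionary @ coeffs)
        weights = np.abs(coeffs) / (1.0 + residual)
        affinity[i, neighbors] = np.maximum(affinity[i, neighbors], weights)

    affinity = 0.5 * (affinity + affinity.T)               # symmetrize

    # 4) Spectral clustering on the precomputed affinity matrix.
    labels = SpectralClustering(n_clusters=n_motions,
                                affinity='precomputed').fit_predict(affinity)
    return labels
```

The precomputed affinity is used so that clustering operates directly on the combined sparse-neighbor and error weights rather than on raw Euclidean distances, mirroring the abstract's construction of the affinity matrix from both terms.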