ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., II-1, 53-60, 2014
https://doi.org/10.5194/isprsannals-II-1-53-2014
© Author(s) 2014. This work is distributed under
the Creative Commons Attribution 3.0 License.

07 Nov 2014

Thermal 3D mapping for object detection in dynamic scenes

M. Weinmann¹, J. Leitloff¹, L. Hoegner², B. Jutzi¹, U. Stilla², and S. Hinz¹
  • ¹Institute of Photogrammetry and Remote Sensing, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
  • ²Photogrammetry and Remote Sensing, Technische Universität München (TUM), München, Germany

Keywords: Multisensor, point cloud, thermal imaging, 3D mapping, dynamic, object detection

Abstract. The automatic analysis of 3D point clouds has become a crucial task in photogrammetry, remote sensing and computer vision. Whereas modern range cameras simultaneously provide both range and intensity images at high frame rates, other devices can be used to obtain further information which may be quite valuable for tasks such as object detection or scene interpretation. In particular, thermal information offers many advantages, since people can easily be detected as heat sources in typical indoor or outdoor environments and, furthermore, a variety of concealed objects such as heating pipes as well as structural properties such as defects in insulation may be observed. In this paper, we focus on thermal 3D mapping, which allows the evolution of a dynamic 3D scene to be observed over time. We present a fully automatic methodology consisting of four successive steps: (i) a radiometric correction, (ii) a geometric calibration, (iii) a robust approach for detecting reliable feature correspondences and (iv) a co-registration of 3D point cloud data and thermal information via a RANSAC-based EPnP scheme. For an indoor scene, we demonstrate that our methodology outperforms other recent approaches in terms of both accuracy and applicability. We additionally show that efficient, straightforward techniques allow a categorization into background, people, passive scene manipulation and active scene manipulation.