ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume V-1-2021
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-1-2021, 47–54, 2021
https://doi.org/10.5194/isprs-annals-V-1-2021-47-2021

17 Jun 2021

DEEPLIO: DEEP LIDAR INERTIAL SENSOR FUSION FOR ODOMETRY ESTIMATION

A. Javanmard-Gh.1, D. Iwaszczuk1, and S. Roth2
  • 1Remote Sensing and Image Analysis, Dept. of Civil and Environmental Engineering Sciences, Technical University of Darmstadt, Germany
  • 2Visual Inference Lab, Dept. of Computer Science, Technical University of Darmstadt, Germany

Keywords: Deep Learning, LiDAR Inertial Odometry, Sensor Fusion, Pose Estimation

Abstract. A good estimate of the position and orientation of a mobile agent is essential for many application domains, such as robotics, autonomous driving, and virtual and augmented reality. In particular, when LiDAR and IMU sensors serve as inputs, most existing methods still rely on classical filter-based fusion to achieve this task. In this work, we propose DeepLIO, a modular, end-to-end learning-based fusion framework for odometry estimation using LiDAR and IMU sensors. For this task, our network learns an appropriate fusion function by considering the different modalities of its input latent feature vectors. We also formulate a loss function that combines both global and local pose information over an input sequence to improve the accuracy of the network predictions. Furthermore, we design three sub-networks with different modules and architectures derived from DeepLIO to analyze the effect of each sensory input on the task of odometry estimation. Experiments on the benchmark dataset demonstrate that DeepLIO outperforms existing learning-based and model-based methods in terms of orientation estimation, while showing only a marginal difference in position accuracy.
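The abstract describes learning a fusion function over the latent feature vectors of the two modalities. As a hedged illustration of one common form such a learned fusion can take, the sketch below implements a per-dimension soft gating between a LiDAR feature and an IMU feature; the function name `soft_fusion` and the gating parameters `W` and `b` are hypothetical stand-ins (in the actual network they would be learned end-to-end), not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_fusion(f_lidar, f_imu, W, b):
    """Fuse two latent feature vectors via learned soft gating.

    Hypothetical sketch: a sigmoid gate g in (0, 1) is computed from the
    concatenated features, and the fused vector is the per-dimension
    convex combination g * f_lidar + (1 - g) * f_imu.
    """
    x = np.concatenate([f_lidar, f_imu])
    g = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # gate, one weight per feature dim
    return g * f_lidar + (1.0 - g) * f_imu

# Toy example with 8-dimensional latent features and random (untrained) gate
# parameters, standing in for the learned ones.
d = 8
f_lidar = rng.standard_normal(d)
f_imu = rng.standard_normal(d)
W = 0.1 * rng.standard_normal((d, 2 * d))
b = np.zeros(d)
fused = soft_fusion(f_lidar, f_imu, W, b)
```

Because the gate is a convex combination per dimension, each fused component lies between the corresponding LiDAR and IMU components, so neither modality is ever discarded outright; the network can learn to lean toward the more reliable sensor per feature dimension.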