ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume V-3-2022
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-3-2022, 511–516, 2022
https://doi.org/10.5194/isprs-annals-V-3-2022-511-2022
 
17 May 2022

PIXEL-RESOLUTION DTM GENERATION FOR THE LUNAR SURFACE BASED ON A COMBINED DEEP LEARNING AND SHAPE-FROM-SHADING (SFS) APPROACH

H. Chen1, X. Hu1, and J. Oberst1,2
  • 1Institute of Geodesy and Geoinformation Science, Technische Universität Berlin, Kaiserin-Augusta-Allee 104-106, 10553 Berlin, Germany
  • 2Institute of Planetary Research, German Aerospace Center (DLR), Rutherfordstr. 2, 12489 Berlin, Germany

Keywords: Pixel-resolution DTM, Lunar Surface, Deep Learning, Shape from Shading, Convolutional Neural Network

Abstract. High-resolution Digital Terrain Models (DTMs) of the lunar surface provide crucial spatial information for lunar exploration missions. In this paper, we propose a method to generate high-quality DTMs by combining deep learning and Shape from Shading (SFS), taking a Lunar Reconnaissance Orbiter Narrow Angle Camera (LROC NAC) image and a coarse-resolution DTM as input. Specifically, we use a Convolutional Neural Network (CNN)-based deep learning architecture to predict an initial pixel-resolution DTM, and then apply SFS to sharpen its details. The CNN model is trained on a dataset of 30,000 samples, formed from stereo-photogrammetry-derived DTMs and orthoimages based on LROC NAC images as well as the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM). Taking the Chang’E-3 landing site as an example, we test the proposed method with a 1.6 m resolution LROC NAC image and a 5 m resolution stereo-photogrammetry-derived DTM as input. We compare our DTMs with those from stereo-photogrammetry and from deep learning alone. The results show that the proposed method can generate high-quality 1.6 m resolution DTMs and clearly improves the visibility of details relative to the initial DTM produced by the deep learning method.
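The SFS refinement stage described in the abstract (adjusting an initial DTM so that its rendered shading matches the observed image, while staying close to the initial prediction) can be sketched in toy form as follows. This is our own minimal illustration, not the paper's formulation: it assumes a Lambertian reflectance model, a simple quadratic prior toward the initial DTM, and a plain gradient-descent solver, none of which are specified in the abstract. The noisy synthetic surface stands in for a CNN-predicted initial DTM.

```python
import numpy as np

DX = 1.6  # assumed ground sample distance in metres (LROC NAC pixel scale)

def lambertian_shading(z, sun):
    """Render shading from a height field under a Lambertian model."""
    gy, gx = np.gradient(z, DX)                 # terrain slopes (metres/metre)
    n = np.dstack([-gx, -gy, np.ones_like(z)])  # un-normalised surface normals
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return np.clip(n @ sun, 0.0, None)          # cos(incidence angle), clipped

def numgrad(cost, z, h=1e-4):
    """Central-difference gradient of a scalar cost w.r.t. every height."""
    g = np.zeros_like(z)
    for idx in np.ndindex(z.shape):
        zp = z.copy(); zp[idx] += h
        zm = z.copy(); zm[idx] -= h
        g[idx] = (cost(zp) - cost(zm)) / (2 * h)
    return g

def sfs_refine(z0, image, sun, lam=0.05, iters=30, lr=1.0):
    """Adjust z0 so its rendered shading matches the image, with a
    quadratic prior keeping the result near the initial DTM z0."""
    def cost(z):
        data = np.sum((lambertian_shading(z, sun) - image) ** 2)
        return data + lam * np.sum((z - z0) ** 2)
    z, c = z0.copy(), cost(z0)
    for _ in range(iters):
        g, step = numgrad(cost, z), lr
        while step > 1e-6:                      # backtracking line search
            z_try = z - step * g
            c_try = cost(z_try)
            if c_try < c:
                z, c = z_try, c_try
                break
            step *= 0.5
    return z

# Toy check: render a synthetic surface, then refine a noisy initial DTM.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:12, 0:12]
z_true = 2.0 * np.sin(x / 3.0) * np.cos(y / 4.0)
sun = np.array([0.4, 0.2, 0.89]); sun /= np.linalg.norm(sun)
img = lambertian_shading(z_true, sun)
z_init = z_true + rng.normal(0.0, 0.3, z_true.shape)  # stand-in for CNN output
z_ref = sfs_refine(z_init, img, sun)
err0 = np.sum((lambertian_shading(z_init, sun) - img) ** 2)
err1 = np.sum((lambertian_shading(z_ref, sun) - img) ** 2)
```

In this sketch the data term pulls the surface toward photometric consistency with the image, while the prior (weight `lam`, our choice) prevents the SFS update from drifting far from the coarse initial solution; a real implementation would use an analytic gradient and a physically calibrated reflectance model.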