ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-2-2020, 533–540, 2020
https://doi.org/10.5194/isprs-annals-V-2-2020-533-2020

03 Aug 2020

GLOBAL MESSAGE PASSING IN NETWORKS VIA TASK-DRIVEN RANDOM WALKS FOR SEMANTIC SEGMENTATION OF REMOTE SENSING IMAGES

L. Mou1, Y. Hua1,2, P. Jin2, and X. X. Zhu1,2
  • 1Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Wessling, Germany
  • 2Signal Processing in Earth Observation, Technical University of Munich (TUM), Munich, Germany

Keywords: deep learning, global message passing, random walking, semantic segmentation, remote sensing

Abstract. The capability to globally model and reason about relations between image regions is crucial for complex scene understanding tasks such as semantic segmentation. Most current semantic segmentation methods rely on deep convolutional neural networks (CNNs), whose convolutions with local receptive fields are typically inefficient at capturing long-range dependencies. Recent works on self-attention mechanisms and relational reasoning networks seek to address this issue by learning pairwise relations between every pair of entities and have shown promising results. However, such approaches incur heavy computational and memory overheads, which makes them infeasible for dense prediction tasks, particularly on large images such as aerial imagery. In this work, we propose an efficient method for global context modeling in which, at each position, a sparse set of features, instead of all features over the spatial domain, is adaptively sampled and aggregated. We further devise a highly efficient instantiation of the proposed method, namely learning RANdom walK samplIng aNd feature aGgregation (RANKING). The proposed module is lightweight and general and can be used in a plug-and-play fashion with the existing fully convolutional neural network (FCN) framework. To evaluate RANKING-equipped networks, we conduct experiments on two aerial scene parsing datasets; the networks achieve competitive results at significantly lower computational and memory cost.
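The core idea described in the abstract, adaptively sampling a sparse set of spatial features per position and aggregating them as a drop-in module for an FCN, can be illustrated with a short sketch. The snippet below is a conceptual PyTorch approximation under stated assumptions, not the authors' RANKING implementation: the module name SparseSampleAggregate, the convolutional offset-prediction head, and the hyperparameter num_samples are illustrative choices. For each position it predicts a small number of sampling locations, gathers features there by bilinear interpolation, averages them, and fuses the result back into the input feature map.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseSampleAggregate(nn.Module):
    """Conceptual sketch (not the paper's exact module): at every position,
    predict K sampling offsets, gather features at those locations with
    bilinear interpolation, and aggregate them instead of attending to
    all positions."""
    def __init__(self, channels, num_samples=8):
        super().__init__()
        self.num_samples = num_samples
        # Predict a 2D offset for each of the K samples at every position.
        self.offset_head = nn.Conv2d(channels, 2 * num_samples, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Base sampling grid in normalized [-1, 1] coordinates (x, y order).
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1)                      # (h, w, 2)
        offsets = self.offset_head(x)                             # (n, 2K, h, w)
        offsets = offsets.view(n, self.num_samples, 2, h, w).permute(0, 1, 3, 4, 2)
        aggregated = 0
        for k in range(self.num_samples):
            # Shift the base grid by the k-th predicted offset and sample.
            grid = (base.unsqueeze(0) + offsets[:, k]).clamp(-1, 1)  # (n, h, w, 2)
            aggregated = aggregated + F.grid_sample(x, grid, align_corners=True)
        aggregated = aggregated / self.num_samples
        # Residual fusion so the module can be dropped into an FCN backbone.
        return x + self.fuse(aggregated)

# Usage: wrap the output feature map of an FCN backbone.
feats = torch.randn(1, 64, 32, 32)
module = SparseSampleAggregate(channels=64, num_samples=8)
out = module(feats)
print(out.shape)   # torch.Size([1, 64, 32, 32])

Because each position touches only K sampled locations rather than all h*w positions, the cost grows linearly with the number of samples instead of quadratically with the spatial resolution, which is the efficiency argument the abstract makes against full pairwise self-attention.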