POSITION-SENSITIVE ATTENTION BASED ON FULLY CONVOLUTIONAL NEURAL NETWORKS FOR LAND COVER CLASSIFICATION
Keywords: land cover classification, semantic segmentation, skip connection, position-sensitive attention, remote sensing images
Abstract. Pixel-wise land cover classification is a fundamental task in remote sensing image interpretation, aiming to identify planimetric features (e.g., trees, water, buildings) on the Earth's surface. Recently, deep learning methods based on fully convolutional neural networks (FCNs) have become the mainstream approach for land cover classification, thanks to their superior performance in image context perception and feature learning. However, for high-resolution remote sensing images rich in object detail, many deep learning methods inherently discard important details: in particular, the pooling operations and stacked convolutions of a conventional FCN can lead to ambiguous classification of adjacent objects. To recover the details lost through stacked convolutions, we propose a position-sensitive attention (PSA) module based on skip connections for land cover classification of high-resolution remote sensing images. The module is designed to produce weights that are sensitive to spatial details, allowing it to refine pixel-level details scattered across spatial positions. Experimental results demonstrate that our method can be readily integrated into existing FCN-based models: a 1% improvement in F1-score is obtained on the 2021 "Shengteng Cup" competition dataset after adding PSA, and, compared with several state-of-the-art methods, similar or even better performance is achieved on the ISPRS Vaihingen 2D dataset with fewer parameters.
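To make the idea of a position-sensitive attention gate on a skip connection concrete, the following is a minimal NumPy sketch. It is an illustrative assumption, not the authors' exact design: the function name, the use of a single 1x1 convolution (here a per-channel dot product) followed by a sigmoid, and all tensor shapes are hypothetical.

```python
import numpy as np

def position_sensitive_attention(skip_feat, w, b=0.0):
    """Hypothetical sketch of a position-sensitive attention (PSA) gate
    applied to a skip-connection feature map.

    skip_feat : (C, H, W) encoder feature map carried by the skip connection
    w         : (C,) weights of a 1x1 convolution producing one logit per position
    b         : scalar bias of that convolution

    Names and shapes are illustrative only.
    """
    # 1x1 convolution over channels -> one logit per spatial position (H, W)
    logits = np.tensordot(w, skip_feat, axes=([0], [0])) + b
    # Sigmoid maps logits to per-position weights in (0, 1)
    attn = 1.0 / (1.0 + np.exp(-logits))
    # Re-weight every channel at each position, emphasising positions
    # that the learned weights deem detail-rich
    return skip_feat * attn[None, :, :]
```

In a full model, the re-weighted skip features would then be fused (e.g., concatenated or summed) with the upsampled decoder features, so that spatial positions carrying fine details contribute more strongly to the final prediction.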