ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume V-2-2020
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., V-2-2020, 443–449, 2020
https://doi.org/10.5194/isprs-annals-V-2-2020-443-2020
03 Aug 2020

GENERATING SYNTHETIC TRAINING DATA FOR OBJECT DETECTION USING MULTI-TASK GENERATIVE ADVERSARIAL NETWORKS

Y. Lin, K. Suzuki, H. Takeda, and K. Nakamura
  • Dept. of R&D, Kokusai Kogyo Co., Ltd., 2-24-1 Harumi-Cho, Fuchu-Shi, Tokyo, 183-0057, Japan

Keywords: Mobile Mapping System, Object Detection, Convolutional Neural Networks, Generative Adversarial Networks, Multi-Task Training, Synthetic to Real

Abstract. Digitizing roadside objects, for instance traffic signs, is a necessary step in generating High Definition Maps (HD Maps), and it remains an open challenge. The rapid development of deep learning based on Convolutional Neural Networks (CNNs) has achieved great success in the computer vision field in recent years. However, the performance of most deep learning algorithms depends heavily on the quality of the training data. Collecting a suitable training dataset is difficult, especially for roadside objects, because their occurrence along roads is highly imbalanced. Although training neural networks on synthetic data has been proposed, the distribution gap between synthetic and real data still exists and can degrade performance. We propose to transfer the style between synthetic and real data using a Multi-Task Generative Adversarial Network (SYN-MTGAN) before training the neural network that detects roadside objects. Experiments focusing on traffic signs show that our proposed method reaches a mAP of 0.77 and improves detection performance for objects whose training samples are difficult to collect.
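The multi-task idea in the abstract — a generator trained with an adversarial loss plus an auxiliary task loss so that style-transferred synthetic images remain useful for detection — can be illustrated with a minimal numeric sketch. The function names, the mean-squared task loss, and the weight `lam` below are illustrative assumptions, not the paper's actual SYN-MTGAN formulation:

```python
import numpy as np

def adversarial_loss(d_fake):
    # Non-saturating generator loss: push the discriminator's
    # score on translated (synthetic-to-real) images toward 1.
    return float(-np.mean(np.log(d_fake + 1e-8)))

def task_loss(pred, target):
    # Auxiliary task head (e.g. traffic-sign classification); it
    # penalizes translations that destroy task-relevant content.
    # MSE is a placeholder for whatever loss the task head uses.
    return float(np.mean((pred - target) ** 2))

def multi_task_generator_loss(d_fake, pred, target, lam=10.0):
    # Combined generator objective: fool the discriminator while
    # preserving semantics (lam is an assumed weighting factor).
    return adversarial_loss(d_fake) + lam * task_loss(pred, target)

# Toy values for two translated images:
d_fake = np.array([0.9, 0.8])   # discriminator scores on translated images
pred = np.array([0.2, 0.7])     # task-head predictions
target = np.array([0.0, 1.0])   # task labels
loss = multi_task_generator_loss(d_fake, pred, target)
```

The detector would then be trained on the style-transferred images in a separate second stage, as the abstract describes.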