ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume IV-2/W7
https://doi.org/10.5194/isprs-annals-IV-2-W7-111-2019
16 Sep 2019

IMAGE-TO-IMAGE TRANSLATION FOR ENHANCED FEATURE MATCHING, IMAGE RETRIEVAL AND VISUAL LOCALIZATION

M. S. Mueller, T. Sattler, M. Pollefeys, and B. Jutzi

Keywords: Image-to-Image Translation, Convolutional Neural Networks, Generative Adversarial Networks, Data Augmentation, 3D Models, Feature Matching, Image Retrieval, Visual Localization

Abstract. The performance of machine learning and deep learning algorithms for image analysis depends significantly on the quantity and quality of the training data. Generating annotated training data is often costly, time-consuming and laborious; data augmentation is a powerful option to overcome these drawbacks. We therefore augment training data by rendering images with arbitrary poses from 3D models to increase the quantity of training images. These rendered images usually show artifacts and are of limited use for advanced image analysis, so we propose to use image-to-image translation to transform them from the rendered domain to the captured domain. We show that translated images in the captured domain are of higher quality than the rendered images. Moreover, we demonstrate that image-to-image translation based on rendered 3D models enhances the performance of common computer vision tasks, namely feature matching, image retrieval and visual localization. The experimental results clearly show the improvement of translated images over rendered images for all investigated tasks. In addition, we present the advantages of utilizing translated images over exclusively captured images for visual localization.
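The data-augmentation pipeline described above can be sketched in a few lines: render images at arbitrary poses from a 3D model, then map each rendered image into the captured domain with a translation model. The sketch below is purely illustrative; both `render_from_3d_model` and `translate_rendered_to_captured` are hypothetical stand-ins (the abstract does not specify the renderer or the translation network, which in practice would be a trained GAN-based generator), so the stubs here only mimic the data flow:

```python
import numpy as np

def render_from_3d_model(pose, size=(64, 64)):
    """Hypothetical stand-in for a 3D-model renderer: produces a
    synthetic RGB image for the given camera pose (here just
    deterministic noise seeded by the pose label)."""
    rng = np.random.default_rng(abs(hash(pose)) % (2**32))
    return rng.random((*size, 3))

def translate_rendered_to_captured(image):
    """Placeholder for a trained image-to-image translation generator
    (e.g. a GAN-based model mapping rendered -> captured domain);
    here a simple 3x3 mean filter mimics artifact reduction."""
    padded = np.pad(image, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.empty_like(image)
    for c in range(image.shape[2]):
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                out[i, j, c] = padded[i:i + 3, j:j + 3, c].mean()
    return out

# Augment training data: render images with arbitrary poses from the
# 3D model, then translate them from the rendered to the captured
# domain before using them for feature matching, retrieval, etc.
poses = ["pose_front", "pose_left", "pose_top"]
rendered = [render_from_3d_model(p) for p in poses]
translated = [translate_rendered_to_captured(img) for img in rendered]
```

The translated set then augments (or replaces) the rendered images when training or evaluating the downstream tasks named in the abstract.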