ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume V-3-2020
https://doi.org/10.5194/isprs-annals-V-3-2020-595-2020
03 Aug 2020

COMPUTER VISION IN THE TELEOPERATION OF THE YUTU-2 ROVER

J. Wang, J. Li, S. Wang, T. Yu, Z. Rong, X. He, Y. You, Q. Zou, W. Wan, Y. Wang, S. Gou, B. Liu, M. Peng, K. Di, Z. Liu, M. Jia, X. Xin, Y. Chen, X. Cheng, X. Feng, C. Liu, S. Han, and X. Liu

Keywords: Chang'e-4, Landing Point Positioning, Rover Localization, Terrain Reconstruction, Path Planning, Terrain Occlusion Analysis

Abstract. On January 3, 2019, the Chang'e-4 (CE-4) probe successfully landed in the Von Kármán crater inside the South Pole-Aitken (SPA) basin. With the support of the relay communication satellite "Queqiao", launched in 2018 and positioned at the Earth-Moon L2 libration point, the lander and the Yutu-2 rover carried out in-situ exploration and patrol surveys, respectively, and made a series of important scientific discoveries. Owing to the complexity and unpredictability of the lunar surface, teleoperation has become the most important control method for operating the rover, and computer vision is a key technology supporting that teleoperation. During the powered descent stage and lunar surface exploration, teleoperation based on computer vision can effectively overcome many technical challenges, such as fast positioning of the landing point, high-resolution seamless mapping of the landing site, localization of the rover in the complex lunar surface environment, terrain reconstruction, and path planning. All these processes helped achieve the first soft landing, roving, and in-situ exploration on the lunar farside.
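Of the challenges listed above, fast positioning of the landing point (described in more detail in the next paragraph) essentially reduces to registering descent imagery against an orbital basemap. The sketch below is only a simplified illustration of such a registration step: it assumes a single nadir-looking descent frame already resampled to the basemap's ground resolution, and the file names, preprocessing, and correlation-based matching are assumptions rather than the method actually used in the mission.

```python
# Illustrative sketch: locate a descent frame within an orbital basemap by
# normalized cross-correlation. File names and preprocessing are hypothetical;
# both images are assumed grayscale and at the same ground resolution.
import cv2

basemap = cv2.imread("orbital_basemap.png", cv2.IMREAD_GRAYSCALE)  # orbital image of the landing region
descent = cv2.imread("descent_frame.png", cv2.IMREAD_GRAYSCALE)    # nadir-looking descent frame

# Normalized cross-correlation tolerates moderate illumination differences
# between the two image sources.
score = cv2.matchTemplate(basemap, descent, cv2.TM_CCOEFF_NORMED)
_, best_score, _, top_left = cv2.minMaxLoc(score)

# Centre of the matched footprint in basemap pixel coordinates; converting to
# lunar latitude/longitude requires the basemap's georeferencing information.
cx = top_left[0] + descent.shape[1] // 2
cy = top_left[1] + descent.shape[0] // 2
print(f"correlation {best_score:.2f}, footprint centre at basemap pixel ({cx}, {cy})")
```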

This paper presents a high-precision positioning technology for the landing point based on multi-source data, including orbital images and CE-4 descent images, together with the resulting positioning solution. The method and its results were successfully applied in an actual engineering mission for the first time in China, providing important support for the topographical analysis of the landing site and for mission planning in subsequent teleoperations. After landing, a 0.03 m resolution digital orthophoto map (DOM) was generated from the descent images and used as one of the base maps for overall rover path planning. Before each movement, the Yutu-2 rover used its hazard avoidance cameras (Hazcam), navigation cameras (Navcam), and panoramic cameras (Pancam) to capture stereo images of the lunar surface at different angles. Local digital elevation models (DEMs) with a 0.02 m resolution were routinely produced at each waypoint from the Navcam and Hazcam images. Based on these DEMs, an obstacle recognition method was designed and a model was established for calculating slope, aspect, roughness, and visibility. Finally, in combination with the Yutu-2 rover's mobility characteristics, a comprehensive cost map for path search was generated.
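As an illustration of the terrain analysis just described, the sketch below derives per-cell slope, aspect, and roughness from a local DEM grid and combines them into a simple traversal cost map. It assumes the DEM is a regular 0.02 m grid held in a NumPy array; the thresholds, weights, and cost formula are illustrative assumptions rather than the model used in the mission, and the terrain occlusion/visibility analysis is omitted.

```python
import numpy as np

CELL = 0.02          # DEM grid spacing in metres (matches the 0.02 m DEMs above)
SLOPE_LIMIT = 20.0   # assumed slope limit for traversal, in degrees
ROUGH_LIMIT = 0.05   # assumed roughness limit, in metres

def _box_mean(a, r=1):
    """Mean over a (2r+1) x (2r+1) window, with edge padding."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + a.shape[0], r + dx:r + dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def terrain_metrics(dem):
    """Per-cell slope (deg), aspect (deg), and roughness (m) of a DEM grid."""
    dz_dy, dz_dx = np.gradient(dem, CELL)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(dz_dy, -dz_dx)) % 360.0  # one common GIS convention
    # Roughness as the deviation of each cell from its local 3 x 3 mean surface,
    # a simple proxy for small rocks and pits.
    roughness = np.abs(dem - _box_mean(dem))
    return slope, aspect, roughness

def cost_map(dem, w_slope=0.6, w_rough=0.4):
    """Traversal cost per cell; cells beyond either limit are marked impassable."""
    slope, _, rough = terrain_metrics(dem)
    cost = w_slope * slope / SLOPE_LIMIT + w_rough * rough / ROUGH_LIMIT
    cost[(slope > SLOPE_LIMIT) | (rough > ROUGH_LIMIT)] = np.inf
    return cost

# Tiny synthetic example: a gentle slope with one 8 cm bump acting as an obstacle.
y, x = np.mgrid[0:200, 0:200]
dem = 0.05 * x * CELL               # roughly a 2.9 degree regional slope
dem[90:110, 90:110] += 0.08
print(np.isinf(cost_map(dem)).sum(), "impassable cells")
```

A cost map of this kind, with impassable cells excluded, can then feed a standard grid-based path search between the current waypoint and the next science target.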

By the end of the first 12 lunar days, the Yutu-2 rover had been working on the lunar farside for more than 300 days, greatly exceeding its projected service life. The rover overcame the complex terrain of the lunar farside and travelled a total distance of more than 300 m, achieving the "double three hundred" breakthrough. In China's future manned lunar landings and exploration of Mars, computer vision will play an integral role in supporting science target selection and scientific investigations, and will become an extremely important core technology for various engineering tasks.