CURIOSITY-DRIVEN REINFORCEMENT LEARNING AGENT FOR MAPPING UNKNOWN INDOOR ENVIRONMENTS
- 1Robotics and Mechatronics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, The Netherlands
- 2Smart Cities, School of Creative Technology, Saxion University of Applied Sciences, The Netherlands
- 3Data Management and Biometrics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, The Netherlands
- 4Applied Analysis, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, The Netherlands
Keywords: Reinforcement Learning, Simultaneous Localization and Mapping, Mobile Robotics, Indoor Mapping
Abstract. Autonomous exploration and mapping is one of the open challenges in robotics and artificial intelligence. Especially when the environment is unknown, choosing the optimal navigation directive is not straightforward. In this paper, we propose a reinforcement learning framework for navigating, exploring, and mapping unknown environments. The reinforcement learning agent selects the commands for steering the mobile robot, while a SLAM algorithm estimates the robot pose and maps the environment. To select optimal actions, the agent is trained to be curious about the world. This concept translates into a curiosity-driven reward function that encourages the agent to steer the mobile robot towards unknown and unseen areas of the world and the map. We test our approach in exploration challenges in different indoor environments. The agent trained with the proposed reward function outperforms agents trained with reward functions commonly used in the literature for such tasks.
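To make the idea of a curiosity-driven reward concrete, the sketch below shows one common way such a signal can be computed from an occupancy grid: reward the agent in proportion to the number of map cells that changed from unknown to observed during the last step. This is an illustrative assumption, not the paper's exact formulation; the cell encoding, the function name, and the scaling factor are all hypothetical.

```python
import numpy as np

# Hypothetical occupancy-grid encoding: -1 = unknown, 0 = free, 1 = occupied.
UNKNOWN = -1

def curiosity_reward(prev_map: np.ndarray, curr_map: np.ndarray,
                     scale: float = 0.01) -> float:
    """Reward proportional to the number of cells that became known
    (free or occupied) between two consecutive SLAM map estimates."""
    newly_seen = np.logical_and(prev_map == UNKNOWN, curr_map != UNKNOWN)
    return scale * int(newly_seen.sum())

# Example: the robot's sensor sweep reveals the first row of a 4x4 grid.
prev = np.full((4, 4), UNKNOWN)
curr = prev.copy()
curr[0, :] = 0  # four cells observed as free
print(curiosity_reward(prev, curr))  # 4 new cells * 0.01 = 0.04
```

A reward of this shape is maximized by trajectories that keep uncovering unseen territory, which is the behavior the abstract attributes to the curious agent; rewards based only on pose novelty or distance traveled would not distinguish revisiting mapped areas from genuine exploration.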