The estimation of the relative pose of an inactive spacecraft by an active servicer spacecraft is a critical task for close-proximity operations such as In-Orbit Servicing and Active Debris Removal. Among the many challenges involved, the scarcity of real space images of the inactive satellite makes the on-ground validation of current monocular camera-based navigation systems particularly difficult: standard Image Processing (IP) algorithms, which are usually tested on synthetic images, tend to fail when deployed in orbit. In response to the need for a reliable validation of pose estimation systems, this paper presents the most recent advances of ESA's GNC Rendezvous, Approach and Landing Simulator (GRALS) testbed for close-proximity operations around uncooperative spacecraft. The proposed testbed is used to validate a Convolutional Neural Network (CNN)-based monocular pose estimation system on representative rendezvous scenarios, with special focus on solving the domain shift problem that affects CNNs trained on synthetic datasets when they are tested on more realistic imagery. The validation of the proposed system is ensured by the introduction of a calibration framework, which returns an accurate reference relative pose between the target spacecraft and the camera for each lab-generated image, allowing a comparative assessment at the pose estimation level. The VICON Tracker System is used together with two KUKA robotic arms to track and to control, respectively, the trajectory of the monocular camera around a 1:25-scale mockup of the Envisat spacecraft. After an overview of the facility, this work describes a novel data augmentation technique focused on texture randomization, aimed at improving the CNN's robustness to previously unseen target textures. Despite the feature detection challenges posed by extreme brightness and illumination conditions, the results on the high-exposure scenario show that the proposed system is capable of bridging the domain shift from synthetic to lab-generated images, returning accurate pose estimates for more than 50% of the rendezvous trajectory images notwithstanding the large domain gaps in target texture and illumination.
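
The abstract only names the texture-randomization augmentation without detailing it; as a rough illustration of the general idea, the sketch below blends a randomly chosen texture onto the masked target region of a synthetic render before it is fed to CNN training. This is not the paper's implementation: the function name randomize_target_texture, the availability of per-image target masks, the texture folder, and the alpha-blending strategy are all assumptions introduced here for illustration.

    # Illustrative sketch only (assumed details, not the authors' method):
    # randomize the apparent texture of the target spacecraft in a synthetic
    # training image so the CNN does not overfit to one specific texture.
    import random
    from pathlib import Path

    import numpy as np
    from PIL import Image


    def randomize_target_texture(image, target_mask, texture_paths,
                                 alpha_range=(0.3, 0.8)):
        """Blend a random texture onto the target region of a synthetic image.

        image:         H x W x 3 uint8 array (rendered training image)
        target_mask:   H x W bool array, True where the target is visible
        texture_paths: list of paths to arbitrary texture images (assumed)
        alpha_range:   range of blending strengths to sample from
        """
        h, w, _ = image.shape

        # Load a random texture and resize it to cover the full frame.
        texture = Image.open(random.choice(texture_paths)).convert("RGB").resize((w, h))
        texture = np.asarray(texture, dtype=np.float32)

        # Sample a blending strength so some of the original shading survives.
        alpha = random.uniform(*alpha_range)

        out = image.astype(np.float32)
        mask3 = target_mask[..., None]  # broadcast the mask over RGB channels
        out = np.where(mask3, (1.0 - alpha) * out + alpha * texture, out)
        return out.clip(0, 255).astype(np.uint8)


    if __name__ == "__main__":
        # Hypothetical file layout, for illustration only.
        textures = sorted(Path("textures").glob("*.jpg"))
        img = np.asarray(Image.open("render_0001.png").convert("RGB"))
        mask = np.asarray(Image.open("mask_0001.png").convert("L")) > 127
        augmented = randomize_target_texture(img, mask, textures)
        Image.fromarray(augmented).save("render_0001_texrand.png")

Applied on the fly during training, an augmentation of this kind exposes the network to many surface appearances of the same geometry, which is the stated goal of improving robustness to previously unseen target textures.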