🤖 AI Summary
This study investigates how synthesizing novel viewpoints can enhance visual place recognition (VPR) performance, particularly in cooperative navigation scenarios involving ground and aerial robots. The authors systematically evaluate the impact of various viewpoint synthesis strategies by integrating seven representative image similarity methods across five public VPR datasets. For the first time, they quantitatively analyze the combined effects of the number of synthesized views, the magnitude of viewpoint variation, and image type on VPR accuracy. Experimental results demonstrate that even a small number of synthesized views significantly improves recognition performance. Moreover, as the volume of synthesized data increases, the number of views and image type become dominant factors, surpassing the influence of viewpoint variation magnitude.
📝 Abstract
The generation of synthetic novel views has the potential to benefit robot navigation in several ways. In image-based navigation, a novel overhead view generated from a scene captured by a ground robot could be used to guide an aerial robot to that location. In Visual Place Recognition (VPR), synthetic aerial views of ground locations can be added to the database, enabling a UAV to identify places seen by the ground robot; similarly, overhead views can be used to generate novel ground-level views. This paper presents a systematic evaluation of synthetic novel views in VPR using five public VPR image databases and seven typical image similarity methods. We show that adding even a small number of synthetic views improves VPR recognition statistics. For larger additions, we find that the magnitude of viewpoint change matters less than the number of views added and the type of imagery in the dataset.
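The core mechanism the abstract describes, augmenting a VPR reference database with descriptors of synthesized novel views so a query from a different viewpoint can still be matched back to a place, can be sketched as below. This is a minimal illustration, not the paper's pipeline: the descriptors are random stand-ins (a real system would use one of the evaluated image similarity methods), and `top1_place` and the perturbation-based "synthesis" are hypothetical placeholders.

```python
import numpy as np

def top1_place(query, ref_descs, n_places):
    """Nearest-neighbour place retrieval by cosine similarity.

    ref_descs stacks one or more descriptors per place; row i describes
    place i % n_places, so an augmented database still maps each match
    back to a place id.
    """
    ref = ref_descs / np.linalg.norm(ref_descs, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return int(np.argmax(ref @ q)) % n_places

rng = np.random.default_rng(42)
n_places = 5
# One ground-view descriptor per place (random stand-ins for real features).
ground_views = rng.normal(size=(n_places, 16))

# Hypothetical synthesized overhead views: here just perturbed copies of the
# ground descriptors, standing in for descriptors of rendered novel viewpoints.
synth_views = ground_views + rng.normal(scale=0.1, size=ground_views.shape)

# Augmented database: two views per place instead of one.
database = np.vstack([ground_views, synth_views])

# A query taken from (roughly) the synthesized viewpoint of place 3.
query = synth_views[3] + rng.normal(scale=0.01, size=16)
print(top1_place(query, database, n_places))  # → 3
```

The index-modulo bookkeeping is the only structural change augmentation requires: the retrieval method itself is untouched, which is why the paper can swap in seven different similarity methods over the same augmented databases.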