sim · medium · navigation · metric: varies

GrandTour: A Legged Robotics Dataset in the Wild for Multi-Modal Perception and State Estimation

Description

Accurate state estimation and multi-modal perception are prerequisites for autonomous legged robots operating in complex, large-scale environments. To date, no large-scale public dataset captures the real-world conditions needed to develop and benchmark algorithms for legged-robot state estimation, perception, and navigation. To address this gap, we introduce GrandTour, a multi-modal legged-robotics dataset collected across challenging outdoor and indoor environments, featuring an ANYmal legged robot.

Source

http://arxiv.org/abs/2602.18164v2