sim · medium · locomotion · metric: varies

Hand-Eye Autonomous Delivery: Learning Humanoid Navigation, Locomotion and Reaching

Description

We propose Hand-Eye Autonomous Delivery (HEAD), a framework that learns navigation, locomotion, and reaching skills for humanoids directly from human motion and vision perception data. We take a modular approach in which a high-level planner commands the target positions and orientations of the humanoid's hands and eyes, and a low-level policy delivers those targets by controlling whole-body movements. Specifically, the low-level whole-body controller learns to track the three points (eyes, left hand, right hand).
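The modular split described above can be sketched as a simple interface: the high-level planner emits target poses for the three tracked points, and the low-level controller is scored on how closely it reaches them. This is a minimal illustration, not the paper's implementation; the class and function names (`PointTarget`, `HeadCommand`, `tracking_error`) and the plain position-error metric are assumptions for the sketch.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class PointTarget:
    """A single tracked point: 3D position plus orientation (hypothetical types)."""
    position: np.ndarray     # (3,) position in the world frame
    orientation: np.ndarray  # (4,) unit quaternion (w, x, y, z)


@dataclass
class HeadCommand:
    """High-level planner output: targets for eyes, left hand, right hand."""
    eyes: PointTarget
    left_hand: PointTarget
    right_hand: PointTarget


def tracking_error(command: HeadCommand,
                   eyes_pos: np.ndarray,
                   left_pos: np.ndarray,
                   right_pos: np.ndarray) -> float:
    """Sum of Euclidean position errors over the three tracked points.

    A stand-in for whatever tracking objective the low-level policy
    actually optimizes; orientation error is omitted for brevity.
    """
    targets = (command.eyes, command.left_hand, command.right_hand)
    actuals = (eyes_pos, left_pos, right_pos)
    return sum(float(np.linalg.norm(t.position - a))
               for t, a in zip(targets, actuals))
```

In this framing, the low-level whole-body controller would be trained (e.g., via reinforcement learning) to drive `tracking_error` toward zero while maintaining balance, leaving route and task decisions to the high-level planner.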

Source

http://arxiv.org/abs/2508.03068v2