sim · medium · mobile-manipulation · metric: varies
Efficient Vision-Language-Action Models for Embodied Manipulation: A Systematic Survey
Description
Vision-Language-Action (VLA) models extend vision-language models to embodied control by mapping natural-language instructions and visual observations to robot actions. Despite their capabilities, VLA systems face significant challenges due to their massive computational and memory demands, which conflict with the constraints of edge platforms such as on-board mobile manipulators that require real-time performance. Addressing this tension has become a central focus of recent research.
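The mapping described above can be sketched as a minimal policy interface. This is a hypothetical illustration of the input/output contract only (all class and field names are invented for this sketch, not taken from any real VLA library), with a placeholder policy standing in for a learned model:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    image: List[List[float]]   # stand-in for an RGB camera frame
    instruction: str           # natural-language command

@dataclass
class Action:
    delta_pose: List[float]    # e.g. a 6-DoF end-effector delta
    gripper: float             # gripper open/close command in [0, 1]

class ToyVLAPolicy:
    """Placeholder policy: returns a fixed action. A real VLA model
    would condition on both the image and the instruction."""

    def act(self, obs: Observation) -> Action:
        # A learned model would run vision-language inference here;
        # this stub only demonstrates the (obs -> action) signature.
        return Action(delta_pose=[0.0] * 6, gripper=1.0)

policy = ToyVLAPolicy()
obs = Observation(image=[[0.0]], instruction="pick up the red block")
action = policy.act(obs)
```

The efficiency concern raised in the text lives entirely inside `act`: on edge hardware, that single call must complete within the robot's control period, which is what motivates the compression and acceleration techniques the survey covers.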