sim · medium · manipulation · metric: varies

AgenticLab: A Real-World Robot Agent Platform that Can See, Think, and Act

Description

Recent advances in large vision-language models (VLMs) have demonstrated generalizable open-vocabulary perception and reasoning, yet their real-robot manipulation capability remains unclear for long-horizon, closed-loop execution in unstructured, in-the-wild environments. Prior VLM-based manipulation pipelines are difficult to compare across different research groups' setups, and many evaluations rely on simulation, privileged state, or specially designed setups. We present AgenticLab, a model-a

Source

http://arxiv.org/abs/2602.01662v3