sim · medium · robotics · metric: varies

Pixel-level Scene Understanding in One Token: Visual States Need What-is-Where Composition

Description

For robotic agents operating in dynamic environments, learning visual state representations from streaming video observations is essential for sequential decision making. Recent self-supervised learning methods have shown strong transferability across vision tasks, but they do not explicitly address what a good visual state should encode. We argue that effective visual states must capture what-is-where by jointly encoding the semantic identities of scene elements and their spatial locations […]
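
As a rough illustration of the what-is-where idea (not the method from the paper), the sketch below composes "what" (per-patch semantic features) with "where" (patch positions) and pools them into a single state token. All names, dimensions, and the attention-pooling choice are illustrative assumptions.

```python
# Hypothetical sketch: bind semantic identity with spatial location,
# then pool the scene into one visual state token.
import torch
import torch.nn as nn


class WhatWhereToken(nn.Module):
    def __init__(self, sem_dim: int, state_dim: int, grid: int = 14):
        super().__init__()
        self.what = nn.Linear(sem_dim, state_dim)            # semantic identity ("what")
        self.where = nn.Embedding(grid * grid, state_dim)    # learned location code ("where")
        self.pool = nn.MultiheadAttention(state_dim, num_heads=4, batch_first=True)
        self.query = nn.Parameter(torch.zeros(1, 1, state_dim))

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (B, N, sem_dim), N = grid*grid patches in raster order
        B, N, _ = patch_feats.shape
        pos = torch.arange(N, device=patch_feats.device)
        tokens = self.what(patch_feats) + self.where(pos)    # compose what with where
        q = self.query.expand(B, -1, -1)
        state, _ = self.pool(q, tokens, tokens)              # pool scene into one token
        return state.squeeze(1)                              # (B, state_dim)


# Usage: 196 ViT-style patch features of dim 768 -> one 256-d state token.
feats = torch.randn(2, 196, 768)
state = WhatWhereToken(768, 256)(feats)
print(state.shape)  # torch.Size([2, 256])
```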

Source

http://arxiv.org/abs/2603.13904v2