sim · medium · manipulation-data · metric: varies

GLaD: Geometric Latent Distillation for Vision-Language-Action Models

Description

Most existing Vision-Language-Action (VLA) models rely primarily on RGB information, ignoring geometric cues crucial for spatial reasoning and manipulation. In this work, we introduce GLaD, a geometry-aware VLA framework that incorporates 3D geometric priors during pretraining through knowledge distillation. Rather than distilling geometric features solely into the vision encoder, we align the LLM's hidden states corresponding to visual tokens with features from a frozen geometry-aware vision encoder.
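The alignment objective described above can be sketched as follows. This is a minimal, hypothetical PyTorch illustration of distilling geometric features into LLM hidden states at visual-token positions: the function name, the learned projection head, and the cosine-similarity loss are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def geometric_distillation_loss(llm_hidden, teacher_feats, proj):
    """Align projected LLM hidden states (at visual-token positions)
    with features from a frozen geometry-aware vision encoder.
    Illustrative sketch; the actual GLaD objective may differ."""
    student = proj(llm_hidden)        # (B, N_vis, D_teacher)
    teacher = teacher_feats.detach()  # teacher is frozen: no gradients
    # 1 - cosine similarity, averaged over batch and visual tokens
    return 1.0 - F.cosine_similarity(student, teacher, dim=-1).mean()

# Toy usage: 2 samples, 16 visual tokens, LLM dim 32, teacher dim 24.
proj = torch.nn.Linear(32, 24)            # learned projection head (assumed)
llm_hidden = torch.randn(2, 16, 32)       # LLM hidden states at visual tokens
teacher_feats = torch.randn(2, 16, 24)    # frozen geometry-aware features
loss = geometric_distillation_loss(llm_hidden, teacher_feats, proj)
print(0.0 <= float(loss) <= 2.0)
```

The `detach()` call keeps the geometry-aware teacher frozen, so gradients flow only into the LLM and the projection head during pretraining.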

Source

http://arxiv.org/abs/2512.09619v1