policy
VLA-World-Model
danimelatru · PyTorch
Overview
Name
VLA-World-Model
Author
danimelatru
Framework
PyTorch
License
unknown
Skill type
other
Evidence level
untested
Task description
A modular VLA architecture that decouples reasoning (World Models) from acting (Policy) for safer, more interpretable robotic control.
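The decoupling described above can be sketched in PyTorch (the card's stated framework). This is a minimal illustration only: the repository's actual module names, dimensions, and interfaces are unknown, so `WorldModel`, `Policy`, and all sizes below are hypothetical.

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Reasoning component (hypothetical sketch): encodes an observation
    into a latent state and predicts the next latent state given an action."""
    def __init__(self, obs_dim=64, latent_dim=32, action_dim=7):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, latent_dim)
        self.dynamics = nn.Linear(latent_dim + action_dim, latent_dim)

    def encode(self, obs):
        return torch.tanh(self.encoder(obs))

    def predict_next(self, latent, action):
        return torch.tanh(self.dynamics(torch.cat([latent, action], dim=-1)))

class Policy(nn.Module):
    """Acting component (hypothetical sketch): maps the world model's
    latent state to an action."""
    def __init__(self, latent_dim=32, action_dim=7):
        super().__init__()
        self.head = nn.Linear(latent_dim, action_dim)

    def forward(self, latent):
        return self.head(latent)

# Decoupled control loop: the policy never sees raw observations, only the
# world model's latent state, so the reasoning/acting interface can be
# inspected or either component swapped out independently.
world_model, policy = WorldModel(), Policy()
obs = torch.randn(1, 64)                 # one observation (batch of 1)
latent = world_model.encode(obs)         # reasoning: perceive
action = policy(latent)                  # acting: decide
next_latent = world_model.predict_next(latent, action)  # reasoning: imagine
```

The point of the split is that `latent` is an explicit, inspectable boundary between the two components, which is where the card's safety and interpretability claims would apply.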
Spaces
Action space
other · 0-dim · 0Hz
Observation space
- type: other
Links
HuggingFace repo
null
Paper (arXiv)
null
Compatible robots
19 entries, none in seed:
- irmvlab-unihand
- nvlabs-handover-sim
- irvlab-poseiden
- denghaoyuan123-awesome-rl-vla
- guoheyu-omnivla
- ananasburn-smolvla-manipulation
- zhenyangliu-activevla-injecting-active-perception-into-vla
- allenai-vla-evaluation-harness
- vlad-brasoveanu-stm32-robotic-arm-6dof
- openhelix-team-openhelix
- rk-edge-simscaleai
- sonjuonr-visual-fusion-gru
- wwzzz-ilstudio
- abdelrahmanfarhan-vla-panda-mujoco
- nvlabs-protomotions
- ivlabs-autonomous-delivery-robot
- zainali24-physical-ai-humanoid-robotics
- shiv207-vla-simulated-tests
- internrobotics-vlac
Compatible environments
No environments list VLA-World-Model yet.
Datasets that reference this policy
No datasets reference VLA-World-Model yet.