
IPD: Boosting Sequential Policy with Imaginary Planning Distillation in Offline Reinforcement Learning

Description

Decision-transformer-based sequential policies have emerged as a powerful paradigm in offline reinforcement learning (RL), yet their efficacy remains constrained by the quality of static datasets and by inherent architectural limitations. Specifically, these models often struggle to integrate suboptimal experiences effectively and fail to plan explicitly for an optimal policy. To bridge this gap, we propose Imaginary Planning Distillation (IPD), a novel framework that seamlessly incorporates …
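For context on the sequential-policy setting the abstract refers to: decision-transformer-style policies treat offline RL as sequence modeling over trajectories of (return-to-go, state, action) tokens. The sketch below is illustrative only and is not the paper's IPD implementation; all function names and shapes are assumptions.

```python
import numpy as np

def returns_to_go(rewards):
    """Suffix sums of rewards: R_t = sum over t' >= t of r_{t'}.
    This is the return-conditioning signal used in decision-transformer-style
    sequence modeling."""
    return np.cumsum(rewards[::-1])[::-1]

def build_tokens(states, actions, rewards):
    """Interleave (return-to-go, state, action) per timestep, the standard
    input layout for a return-conditioned sequential policy (illustrative)."""
    rtg = returns_to_go(np.asarray(rewards, dtype=float))
    tokens = []
    for t in range(len(rewards)):
        tokens.append(("rtg", float(rtg[t])))
        tokens.append(("state", states[t]))
        tokens.append(("action", actions[t]))
    return tokens

# Toy trajectory: three steps with rewards 1, 0, 2.
rewards = [1.0, 0.0, 2.0]
print(returns_to_go(np.array(rewards)))  # [3. 2. 2.]
```

A transformer policy trained on such token sequences predicts the next action given the desired return-to-go, which is the architecture the abstract says IPD builds upon.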

Source

http://arxiv.org/abs/2603.04289v1