Tags: sim · medium · offline-rl · metric: varies

Diffusion Policies with Value-Conditional Optimization for Offline Reinforcement Learning

Description

In offline reinforcement learning, value overestimation caused by out-of-distribution (OOD) actions significantly limits policy performance. Recently, diffusion models have been leveraged for their strong distribution-matching capabilities, enforcing conservatism through behavior policy constraints. However, existing methods often apply indiscriminate regularization to redundant actions in low-quality datasets, resulting in excessive conservatism and an imbalance between the expressiveness and e
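The conservatism trade-off described above — regularizing toward the behavior policy while avoiding indiscriminate pressure on redundant, low-value actions — can be illustrated with a minimal value-conditional weighting sketch. This is a hedged illustration, not the paper's algorithm: the function names (`advantage_weights`, `weighted_bc_loss`), the sigmoid weighting scheme, and the temperature `beta` are all hypothetical choices made here to show how a behavior-cloning penalty might be modulated by estimated advantage.

```python
import numpy as np

def advantage_weights(q_values, v_values, beta=1.0):
    """Per-sample regularization weights (hypothetical illustration).

    Actions with high estimated advantage Q(s, a) - V(s) receive a
    smaller behavior-cloning penalty; low-advantage (likely redundant)
    actions are constrained more strongly toward the dataset.
    """
    adv = np.asarray(q_values) - np.asarray(v_values)
    # Sigmoid of the *negative* scaled advantage: weight -> 0 as adv grows.
    return 1.0 / (1.0 + np.exp(beta * adv))

def weighted_bc_loss(policy_actions, dataset_actions, weights):
    """Behavior-cloning penalty, weighted per transition."""
    sq_err = np.sum((np.asarray(policy_actions)
                     - np.asarray(dataset_actions)) ** 2, axis=-1)
    return float(np.mean(np.asarray(weights) * sq_err))
```

Under this sketch, a transition whose action looks better than the dataset average contributes less to the conservatism term, so the policy retains expressiveness where the data supports it.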

Source

http://arxiv.org/abs/2511.08922v1