
Regret Minimization Experience Replay in Off-Policy Reinforcement Learning

Description

In reinforcement learning, experience replay stores past samples for later reuse. Prioritized sampling is a promising technique for making better use of these samples. Previous prioritization criteria include TD error, recency, and corrective feedback, most of which are heuristically designed. In this work, we start from the regret minimization objective and obtain an optimal prioritization strategy for the Bellman update that directly maximizes the return of the policy. The theory suggests that …
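For context, the heuristic baseline the paper improves on can be sketched as a TD-error-based prioritized replay buffer. This is a minimal generic sketch of that common baseline (in the style of standard prioritized experience replay), not the regret-minimization scheme proposed in the paper; the class name, the `alpha` exponent, and the small priority offset are illustrative choices.

```python
import random


class PrioritizedReplayBuffer:
    """Minimal TD-error-based prioritized replay (generic baseline sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # controls how strongly priorities skew sampling
        self.buffer = []            # stored transitions
        self.priorities = []        # one priority per transition
        self.pos = 0                # next slot to overwrite once full

    def add(self, transition, td_error=1.0):
        # Priority grows with |TD error|; small offset keeps it nonzero.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices with probability proportional to priority.
        idxs = random.choices(
            range(len(self.buffer)), weights=self.priorities, k=batch_size
        )
        return idxs, [self.buffer[i] for i in idxs]

    def update_priorities(self, idxs, td_errors):
        # Refresh priorities after the sampled transitions are re-evaluated.
        for i, e in zip(idxs, td_errors):
            self.priorities[i] = (abs(e) + 1e-6) ** self.alpha
```

The paper's contribution is to replace the heuristic priority (here, `|TD error|^alpha`) with weights derived from the regret minimization objective.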

Source

http://arxiv.org/abs/2105.07253v3