sim · medium · locomotion · metric: varies

Mitigating Suboptimality of Deterministic Policy Gradients in Complex Q-functions

Description

In reinforcement learning, off-policy actor-critic methods such as DDPG and TD3 use deterministic policy gradients: the Q-function is learned from environment data, while the actor maximizes it via gradient ascent. We observe that in complex tasks such as dexterous manipulation and restricted locomotion with mobility constraints, the Q-function exhibits many local optima, making gradient ascent prone to getting stuck. To address this, we introduce SAVO, an actor architecture that generates multiple action proposals and selects the one with the highest Q-value, reducing the actor's susceptibility to suboptimal local optima.
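The failure mode above can be illustrated with a toy sketch. The code below is a hypothetical illustration, not the SAVO algorithm itself: a hand-written 1-D "Q-function" with several local optima stands in for a learned critic, `grad_ascent` plays the role of a single deterministic actor's policy improvement, and the multi-proposal loop shows why evaluating several candidate actions and taking the argmax escapes local optima that trap a single ascent.

```python
import numpy as np

# Toy 1-D Q-function with several local optima (stand-in for a learned critic).
def q(a):
    return np.sin(5 * a) + 0.5 * a  # global optimum near a ~ 0.33 on [-1, 1]

def grad_ascent(a, lr=0.01, steps=200):
    """Single-actor gradient ascent on Q, as in DDPG/TD3 policy improvement."""
    for _ in range(steps):
        eps = 1e-4
        g = (q(a + eps) - q(a - eps)) / (2 * eps)  # numerical dQ/da
        a = np.clip(a + lr * g, -1.0, 1.0)
    return a

# A single actor initialized at a = -0.5 climbs to a nearby local optimum.
single = grad_ascent(-0.5)

# Multiple proposals (a crude stand-in for SAVO's multiple action proposals):
# run ascent from several starting actions and keep the highest-Q result.
proposals = [grad_ascent(a0) for a0 in np.linspace(-1.0, 1.0, 5)]
best = max(proposals, key=q)

print(q(single), q(best))  # the multi-proposal pick attains at least as high a Q
```

With this particular Q-landscape, the single actor converges to the local optimum near a ≈ -0.92 while the best proposal reaches the global optimum near a ≈ 0.33; the real method applies the same principle to a learned, high-dimensional Q-function.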

Source

http://arxiv.org/abs/2410.11833v2