sim · medium · atari · metric: varies

A Temporally Correlated Latent Exploration for Reinforcement Learning

Description

Efficient exploration remains a longstanding problem in deep reinforcement learning. Rather than depending solely on extrinsic rewards from the environment, existing methods use intrinsic rewards to enhance exploration. However, we demonstrate that these methods are vulnerable to the Noisy TV problem and to stochasticity. To tackle this, we propose Temporally Correlated Latent Exploration (TeCLE), a novel intrinsic reward formulation that employs an action-conditioned latent space.
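To make the intrinsic-reward idea concrete, here is a minimal sketch of the general pattern such methods share: a novelty bonus computed as the error of an action-conditioned prediction of the next state's latent embedding. All names (`encoder`, `predictor`, the toy linear weights) are illustrative assumptions, and this is not TeCLE's actual formulation, which additionally injects temporally correlated noise into the latent space.

```python
import numpy as np

def intrinsic_reward(encoder, predictor, state, action, next_state):
    """Novelty bonus: squared error of an action-conditioned
    prediction of the next state's latent embedding.
    (Illustrative sketch only; not the paper's method.)"""
    z_next = encoder(next_state)
    z_pred = predictor(encoder(state), action)
    return float(np.linalg.norm(z_next - z_pred) ** 2)

# Toy linear encoder/predictor standing in for learned networks.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(4, 8))       # latent dim 4, observation dim 8
W_pred = rng.normal(size=(4, 4 + 2))  # conditions on latent + one-hot action

encoder = lambda s: W_enc @ s
predictor = lambda z, a: W_pred @ np.concatenate([z, a])

s, s_next = rng.normal(size=8), rng.normal(size=8)
a = np.array([1.0, 0.0])              # one-hot action
r_int = intrinsic_reward(encoder, predictor, s, a, s_next)
```

In practice the bonus is added to the extrinsic reward during training; a poorly predicted transition (a novel one, under the model) yields a larger bonus and thus encourages revisiting that region of the state space.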

Source

http://arxiv.org/abs/2412.04775v1