Learning to Mix n-Step Returns: Generalizing lambda-Returns for Deep Reinforcement Learning
Description
Reinforcement Learning (RL) can model complex behavior policies for goal-directed sequential decision-making tasks. A hallmark of RL algorithms is Temporal Difference (TD) learning: the value function for the current state is moved towards a bootstrapped target estimated from the next state's value function. $\lambda$-returns generalize beyond 1-step returns and strike a balance between Monte Carlo and TD learning methods. While $\lambda$-returns have been extensively studied in RL, they haven't been ex
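To make the idea concrete, here is a minimal sketch (not the paper's method) of how a $\lambda$-return mixes n-step returns with exponentially decaying weights $(1-\lambda)\lambda^{n-1}$, reducing to the 1-step TD target at $\lambda = 0$ and to the Monte Carlo return at $\lambda = 1$. The function names and the assumption of a finite episode (so the final n-step return is the full Monte Carlo return with no bootstrap) are illustrative choices, not from the paper.

```python
def n_step_returns(rewards, values, gamma):
    """All n-step returns from time step 0 of a finite episode.

    rewards[k] is the reward received at step k+1; values[k] is the
    value estimate V(s_k) used for bootstrapping. Entry n-1 of the
    result is the n-step return G^(n).
    """
    T = len(rewards)
    returns = []
    for n in range(1, T + 1):
        g = sum(gamma**k * rewards[k] for k in range(n))
        if n < T:
            # bootstrap with the value of the state n steps ahead
            g += gamma**n * values[n]
        returns.append(g)
    return returns


def lambda_return(rewards, values, gamma, lam):
    """lambda-return: exponentially weighted mix of n-step returns.

    Weights are (1-lam) * lam**(n-1) for n < T; the leftover mass
    lam**(T-1) goes to the full (Monte Carlo) return.
    """
    G = n_step_returns(rewards, values, gamma)
    T = len(G)
    mixed = sum((1 - lam) * lam**(n - 1) * G[n - 1] for n in range(1, T))
    return mixed + lam**(T - 1) * G[T - 1]
```

For example, with `lam=0.0` the result equals the 1-step TD target `rewards[0] + gamma * values[1]`, and with `lam=1.0` it equals the discounted sum of all rewards.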