sim · medium · atari · metric: varies

Eau De $Q$-Network: Adaptive Distillation of Neural Networks in Deep Reinforcement Learning

Description

Recent work has demonstrated that sparse deep reinforcement learning agents can be competitive with their dense counterparts. This opens up opportunities for reinforcement learning in settings where inference time and memory are cost-sensitive or constrained by hardware. Until now, dense-to-sparse methods have relied on hand-designed sparsity schedules that are not synchronized with the agent's learning pace. Crucially, the final sparsity level is chosen as …
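To make "hand-designed sparsity schedule" concrete, below is a minimal sketch of one widely used example: the polynomial (cubic) decay schedule of Zhu & Gupta's gradual magnitude pruning. The function name and parameters are illustrative, not from this paper; the point is that the schedule depends only on the step count, not on the agent's learning progress.

```python
def polynomial_sparsity(step, total_steps, s_initial=0.0, s_final=0.95, power=3):
    """Hand-designed sparsity schedule (Zhu & Gupta-style polynomial decay).

    Sparsity ramps from s_initial to s_final over a fixed step budget.
    Note it is a pure function of the step counter: it does not adapt
    to how quickly the agent is actually learning.
    """
    # Clamp normalized progress to [0, 1] so the schedule saturates at s_final.
    t = min(max(step / total_steps, 0.0), 1.0)
    return s_final + (s_initial - s_final) * (1.0 - t) ** power
```

At step 0 this returns `s_initial`; once `step >= total_steps` it stays pinned at `s_final`, regardless of the agent's performance at that point.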

Source

http://arxiv.org/abs/2503.01437v2