sim · medium · offline-rl · metric: varies

Optimal Perturbation Budget Allocation for Data Poisoning in Offline Reinforcement Learning

Description

Offline Reinforcement Learning (RL) enables policy optimization from static datasets but is inherently vulnerable to data poisoning attacks. Existing attack strategies typically rely on locally uniform perturbations that treat all samples indiscriminately. This approach is inefficient, since it wastes the perturbation budget on low-impact samples, and lacks stealth, since uniform perturbations introduce significant statistical deviations. In this paper, we propose a novel Global Budget Allocation attack strategy. Leveraging …
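The core idea contrasted above — spending a global perturbation budget in proportion to each sample's estimated impact rather than uniformly — can be illustrated with a minimal sketch. This is not the paper's method; the proportional rule, the `influence` scores, and the linear "impact" model are all illustrative assumptions.

```python
def allocate_budget(influence, total_budget):
    """Split a global perturbation budget across samples in
    proportion to (hypothetical) per-sample influence scores,
    instead of giving every sample the same share."""
    total = sum(influence)
    return [total_budget * w / total for w in influence]

def toy_attack_impact(influence, allocation):
    """Toy linear model: a sample's contribution to the attack is
    (influence x perturbation size). Purely for illustration."""
    return sum(i * a for i, a in zip(influence, allocation))

if __name__ == "__main__":
    influence = [4.0, 1.0, 1.0, 2.0]   # assumed impact estimates
    budget = 8.0                        # global perturbation budget

    proportional = allocate_budget(influence, budget)
    uniform = [budget / len(influence)] * len(influence)

    # Under this toy model, proportional allocation never does worse
    # than uniform (Cauchy-Schwarz: sum(w^2)/sum(w) >= mean(w)).
    print(toy_attack_impact(influence, proportional))
    print(toy_attack_impact(influence, uniform))
```

Under this toy linear model the proportional rule dominates the locally uniform one, which is the inefficiency the description attributes to existing attacks; a real attack would also need to constrain per-sample perturbations for stealth.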

Source

http://arxiv.org/abs/2512.08485v2