sim · medium · offline-rl · metric: varies

From Robotics to Sepsis Treatment: Offline RL via Geometric Pessimism

Description

Offline Reinforcement Learning (RL) promises to recover optimal policies from static datasets, yet it remains susceptible to overestimation of out-of-distribution (OOD) actions, particularly on fractured, sparse data manifolds. Current solutions force a trade-off between computational efficiency and performance: methods like CQL offer rigorous conservatism but demand heavy compute, while efficient expectile-based methods like IQL often fail to correct OOD errors on p
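The expectile-based value fitting that IQL relies on can be sketched as follows. This is a minimal illustration of the asymmetric squared loss, not the paper's method; the function name and example values are ours.

```python
import numpy as np

def expectile_loss(diff, tau=0.7):
    """Asymmetric L2 loss used in expectile regression (as in IQL).

    diff = target - prediction. Positive errors (underestimates) are
    weighted by tau, negative errors by (1 - tau); tau > 0.5 biases the
    fit toward the upper tail of the value distribution, approximating a
    max over in-distribution actions without querying OOD ones.
    """
    weight = np.where(diff > 0, tau, 1.0 - tau)
    return weight * diff ** 2

# With tau = 0.5 this reduces to a symmetric (halved) squared error;
# with tau = 0.9 an underestimate of the same magnitude costs 9x more.
errors = np.array([-1.0, 2.0])
print(expectile_loss(errors, tau=0.9))  # → [0.1 3.6]
```

Because the loss only reweights errors on actions that appear in the dataset, it avoids explicit OOD penalties entirely, which is the source of both IQL's efficiency and its weakness on poorly covered regions.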

Source

http://arxiv.org/abs/2602.08655v2