sim · medium · offline-rl · metric: varies

Distributional Inverse Reinforcement Learning

Description

We propose a distributional framework for offline Inverse Reinforcement Learning (IRL) that jointly models uncertainty over reward functions and full distributions of returns. Unlike conventional IRL approaches that recover a deterministic reward estimate or match only expected returns, our method captures richer structure in expert behavior, particularly in learning the reward distribution, by minimizing first-order stochastic dominance (FSD) violations, thereby integrating distortion risk measures.
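To make the FSD criterion concrete: distribution X first-order stochastically dominates Y when every quantile of X is at least the matching quantile of Y. A minimal sketch of one quantile-based way such a violation could be scored follows; `fsd_violation` and its signature are illustrative assumptions, not the paper's actual objective.

```python
import numpy as np

def fsd_violation(expert_returns, policy_returns, n_quantiles=100):
    """Average quantile shortfall of the policy's return distribution.

    Zero iff the policy returns first-order stochastically dominate the
    expert returns on the sampled quantile grid; positive otherwise.
    (Illustrative sketch only, not the paper's exact loss.)
    """
    taus = (np.arange(n_quantiles) + 0.5) / n_quantiles
    q_expert = np.quantile(expert_returns, taus)
    q_policy = np.quantile(policy_returns, taus)
    # Penalize only quantiles where the policy falls short of the expert.
    return float(np.mean(np.maximum(q_expert - q_policy, 0.0)))

rng = np.random.default_rng(0)
expert = rng.normal(1.0, 0.5, size=10_000)
worse = rng.normal(0.5, 0.5, size=10_000)
print(fsd_violation(expert, worse) > 0.0)   # shifted-down policy violates FSD
print(fsd_violation(expert, expert + 1.0))  # dominating policy: 0.0
```

Weighting the quantile grid non-uniformly before averaging is one way a distortion risk measure could enter such an objective.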

Source

http://arxiv.org/abs/2510.03013v2