Tags: sim · medium · offline-rl · Metric: varies

Sample-Efficient Tabular Self-Play for Offline Robust Reinforcement Learning

Description

Multi-agent reinforcement learning (MARL) is a thriving field that studies how multiple agents make decisions independently in a shared dynamic environment. Because of environmental uncertainty, MARL policies must remain robust in order to bridge the sim-to-real gap. We focus on robust two-player zero-sum Markov games (TZMGs) in the offline setting, specifically on tabular robust TZMGs (RTZMGs). We propose a model-based algorithm for offline RTZMGs, RTZ-VI-LCB, which performs optimistic robust value iteration with lower confidence bounds.
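The core idea behind an LCB-penalized value iteration can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the paper's exact algorithm: the bonus constant `c`, the `1/sqrt(n)` penalty rate, and the pure-strategy maximin stage-game solve (in place of a mixed-strategy Nash LP) are all simplifying choices made here for brevity.

```python
import numpy as np

def lcb_bonus(counts, horizon, c=0.5):
    """Pessimistic penalty that shrinks with visitation counts.

    The 1/sqrt(n) rate mirrors Hoeffding-style confidence widths;
    the constant c is a hypothetical choice, not the paper's bonus.
    """
    return c * horizon / np.sqrt(np.maximum(counts, 1))

def robust_vi_lcb(P_hat, R, counts, H):
    """Finite-horizon tabular value iteration with LCB penalties.

    P_hat  : (S, A, B, S) empirical transition kernel from offline data
    R      : (S, A, B)    reward for the max player (min player gets -R)
    counts : (S, A, B)    visitation counts in the offline dataset
    H      : horizon

    Each stage game is resolved by pure-strategy maximin, a
    simplification of the mixed-strategy Nash solve that a full
    implementation of a zero-sum Markov game would require.
    """
    S = P_hat.shape[0]
    V = np.zeros(S)
    for _ in range(H):
        # One-step lookahead under the empirical model
        Q = R + np.einsum('sabt,t->sab', P_hat, V)
        # Subtract the pessimistic bonus; keep values in [0, H]
        Q_lcb = np.clip(Q - lcb_bonus(counts, H), 0.0, H)
        # Max player maximizes, min player minimizes
        V = Q_lcb.min(axis=2).max(axis=1)
    return V
```

Running this on the same game with large versus small visitation counts illustrates the intended pessimism: poorly covered state-action pairs are penalized harder, so the value estimated from a sparse offline dataset never exceeds the one estimated from a well-covered dataset.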

Source

http://arxiv.org/abs/2512.00352v1