
Stabilizing Policy Optimization via Logits Convexity

Description

While reinforcement learning (RL) has been central to the recent success of large language models (LLMs), RL optimization is notoriously unstable, especially when compared to supervised fine-tuning (SFT). In this work, we investigate the stability gap between SFT and RL from a gradient-based perspective, and show that the convexity of the SFT loss with respect to model logits plays a key role in enabling stable training. Our theoretical analysis demonstrates that this property induces favorable
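The convexity claim can be checked concretely for the standard SFT objective: the cross-entropy loss of a softmax over logits has Hessian `diag(p) − p pᵀ`, which is positive semi-definite, so the loss is convex in the logits. Below is a minimal NumPy sketch (not from the paper) verifying this numerically for a random logit vector:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# SFT cross-entropy loss for target index y:  L(z) = -log softmax(z)[y]
# Its Hessian w.r.t. the logits is H = diag(p) - p p^T, independent of y.
rng = np.random.default_rng(0)
z = rng.normal(size=5)          # random logits
p = softmax(z)
H = np.diag(p) - np.outer(p, p) # Hessian of the cross-entropy loss
eigvals = np.linalg.eigvalsh(H)
print(eigvals.min() >= -1e-12)  # True: H is positive semi-definite
```

Positive semi-definiteness of the Hessian for every logit vector is exactly convexity in the logits; typical RL policy-gradient objectives lack this property, which is the stability gap the paper analyzes.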

Source

http://arxiv.org/abs/2603.00963v1