
On the Direction of RLVR Updates for LLM Reasoning: Identification and Exploitation

Description

Reinforcement learning with verifiable rewards (RLVR) has substantially improved the reasoning capabilities of large language models. While existing analyses identify that RLVR-induced changes are sparse, they primarily focus on the magnitude of these updates, largely overlooking their direction. In this work, we argue that the direction of updates is a more critical lens for understanding RLVR's effects, which can be captured by the signed, token-level log probability difference.
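The signed, token-level log probability difference can be sketched as follows. This is an illustrative reading of the quantity named in the abstract, not the paper's implementation; the function name and toy per-token probabilities are assumptions.

```python
import numpy as np

def signed_logprob_diff(logp_rlvr: np.ndarray, logp_base: np.ndarray) -> np.ndarray:
    """Per-token signed difference between the post-RLVR model's and the
    base model's log probabilities for the same sampled tokens.
    Positive entries mean RLVR raised that token's probability."""
    return logp_rlvr - logp_base

# Toy per-token probabilities for a 4-token completion (illustrative values).
logp_base = np.log(np.array([0.50, 0.20, 0.10, 0.40]))
logp_rlvr = np.log(np.array([0.60, 0.15, 0.10, 0.70]))

delta = signed_logprob_diff(logp_rlvr, logp_base)
direction = np.sign(delta)   # +1 pushed up, -1 pushed down, 0 unchanged
magnitude = np.abs(delta)    # what magnitude-only analyses would look at
```

The sign separates tokens the update promoted from tokens it suppressed, which is exactly the information a magnitude-only view discards.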

Source

http://arxiv.org/abs/2603.22117v1