sim · medium · rl · metric: varies

Reinforcement Learning-based Knowledge Distillation with LLM-as-a-Judge

Description

Reinforcement Learning (RL) has been shown to substantially improve the reasoning capability of both small and large language models (LLMs), but existing approaches typically rely on verifiable rewards, and hence on ground truth labels. We propose an RL framework that uses rewards from an LLM acting as a judge evaluating model outputs over large amounts of unlabeled data, enabling label-free knowledge distillation and replacing the need for ground truth supervision. Notably, the judge operates with a sing…
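The core idea (judge score as a scalar reward, no labels) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the toy heuristic `judge_reward` stands in for an actual LLM judge call, and the group-mean baseline is a common label-free choice assumed here, not necessarily the paper's exact RL algorithm.

```python
def judge_reward(prompt: str, response: str) -> float:
    """Stand-in for an LLM judge scoring a response in [0, 1].

    In the paper's setting this would be a single call to a judge LLM
    grading the student's output; a toy heuristic keeps the sketch runnable.
    """
    # Toy criteria: response mentions the prompt's topic word, and is not trivially short.
    on_topic = 1.0 if prompt.split()[-1].lower().strip("?.") in response.lower() else 0.0
    length_bonus = min(len(response.split()) / 20.0, 1.0)
    return 0.5 * on_topic + 0.5 * length_bonus


def group_advantages(prompt: str, responses: list[str]) -> list[float]:
    """REINFORCE-style advantages over a group of sampled responses,
    using the group mean reward as a baseline (no ground truth needed)."""
    rewards = [judge_reward(prompt, r) for r in responses]
    baseline = sum(rewards) / len(rewards)
    # Responses the judge prefers get positive advantage; others negative.
    return [r - baseline for r in rewards]
```

These advantages would then weight the policy-gradient update on the student model's sampled tokens; the judge replaces the verifiable-reward oracle used in label-dependent RL pipelines.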

Source

http://arxiv.org/abs/2604.02621v1