OGER: A Robust Offline-Guided Exploration Reward for Hybrid Reinforcement Learning

2026-04-20

Artificial Intelligence
AI summary

The authors developed a new method called OGER to help large language models get better at reasoning by encouraging them to explore new ideas more effectively. They combined teaching from multiple pre-trained models with a system that rewards the model for trying out novel paths during learning. Their experiments showed that OGER improved mathematical reasoning skills and still worked well on other types of problems. They also carefully studied how different parts of their system contributed to the improvements.

Reinforcement Learning, Large Language Models, Reward Modeling, Offline Teacher Guidance, Entropy, Exploration, Mathematical Reasoning, Generalization, Ablation Study
Authors
Xinyu Ma, Mingzhou Xu, Xuebo Liu, Chang Jin, Qiang Wang, Derek F. Wong, Min Zhang
Abstract
Recent advancements in Reinforcement Learning with Verifiable Rewards (RLVR) have significantly improved Large Language Model (LLM) reasoning, yet models often struggle to explore novel trajectories beyond their initial latent space. While offline teacher guidance and entropy-driven strategies have been proposed to address this, they often lack deep integration or are constrained by the model's inherent capacity. In this paper, we propose OGER, a novel framework that unifies offline teacher guidance and online reinforcement learning through a specialized reward modeling lens. OGER employs multi-teacher collaborative training and constructs an auxiliary exploration reward that leverages both offline trajectories and the model's own entropy to incentivize autonomous exploration. Extensive experiments across mathematical and general reasoning benchmarks demonstrate that OGER significantly outperforms competitive baselines, achieving substantial gains in mathematical reasoning while maintaining robust generalization to out-of-domain tasks. We provide a comprehensive analysis of training dynamics and conduct detailed ablation studies to validate the effectiveness of our entropy-aware reward modulation. Our code is available at https://github.com/ecoli-hit/OGER.git.
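The abstract describes an auxiliary exploration reward that combines offline teacher trajectories with the policy's own entropy. The paper's exact formulation is not given here, so the following is a minimal, hypothetical sketch of what such an entropy-aware reward shaping might look like: the function name, the `teacher_match` signal, and all weights (`beta`, `entropy_cap`) are illustrative assumptions, not the authors' implementation.

```python
def oger_style_reward(verifiable_reward, teacher_match, token_entropies,
                      beta=0.1, entropy_cap=1.0):
    """Hypothetical sketch of an entropy-aware exploration reward.

    verifiable_reward: the base RLVR signal (e.g. 1.0 for a correct answer).
    teacher_match: in [0, 1], how closely the sampled trajectory follows
        the offline teacher trajectories (an assumed similarity signal).
    token_entropies: per-token entropies of the policy on this trajectory.
    """
    # Average per-token entropy, capped so the bonus stays bounded.
    avg_entropy = min(sum(token_entropies) / len(token_entropies), entropy_cap)
    # Exploration bonus: pay more for trajectories that diverge from the
    # teachers (low teacher_match) while the policy is still uncertain
    # (high entropy), nudging the model to explore novel paths.
    exploration_bonus = beta * avg_entropy * (1.0 - teacher_match)
    return verifiable_reward + exploration_bonus
```

Under this sketch, a correct answer reached via a novel, high-entropy trajectory earns slightly more than one that simply replays a teacher trajectory, which matches the stated goal of incentivizing autonomous exploration.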