FormalRewardBench: A Benchmark for Formal Theorem Proving Reward Models
2026-05-11 • Artificial Intelligence
AI summary
The authors studied how well computer models can judge whether formal math proofs are correct. Current training methods give only yes/no answers, which makes it hard for models to learn from partially right proofs. To improve this, the authors created FormalRewardBench, a test set that pairs correct proofs with purposely flawed ones to see how well different models can judge proof quality. They found that large general-purpose language models do better at evaluating proofs than specialized theorem-proving models. This work aims to help build better reward systems for teaching computers formal math.
Keywords
neural theorem proving, reinforcement learning, verifiable rewards, credit assignment, reward models, formal theorem proving, Lean 4, large language models, error injection, proof evaluation
Authors
Zeynel A. Uluşan, Burak S. Akbudak, Can S. Erer, Gözde Gül Şahin
Abstract
Recent neural theorem provers use reinforcement learning with verifiable rewards (RLVR), where proof assistants provide binary correctness signals. While verifiable rewards are cheap, scalable, and immune to reward hacking, they suffer from sparse credit assignment: models receive no learning signal on difficult problems where partial progress goes unrewarded. This motivates learned reward models that can evaluate proof quality beyond binary verification. However, comparing reward models is challenging, since doing so typically requires expensive RL training ablations. To address this, we introduce FormalRewardBench, the first benchmark for evaluating reward models in formal theorem proving with Lean 4. Our benchmark consists of 250 preference pairs in which correct proofs are paired with incorrect variants generated through five expert-curated error injection strategies: forced mistakes, minimal single-point variations, verbose incorrect proofs, natural language justification, and Python code injection. We evaluate frontier LLMs (e.g., Claude Opus 4.5), judge LLMs (e.g., CompassJudger-1-14B), general-purpose LLMs (e.g., Qwen2.5-72B-Instruct), and specialized theorem proving models (e.g., DeepSeek-Prover-V2-7B). Our results reveal that frontier LLMs achieve the highest performance (59.8%) while specialized theorem provers perform the worst (24.4%), suggesting that theorem proving ability does not transfer to proof evaluation. We further analyze the individual error injection mechanisms, finding that most remain challenging across all model families. We release FormalRewardBench publicly to encourage further research on reward models for formal mathematics.
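To make the benchmark's construction concrete, consider how a minimal single-point variation might corrupt a Lean 4 proof: a single lemma application is altered so the proof no longer type-checks, while the surrounding structure stays plausible. The following pair is a hypothetical illustration, not drawn from the benchmark itself.

```lean
-- Correct: Nat.add_comm a b has type a + b = b + a, matching the goal.
theorem pair_correct (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b

-- Injected error: swapping the arguments yields b + a = a + b, which does
-- not match the goal, so this variant fails to type-check.
theorem pair_incorrect (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm b a
```

A reward model is then judged by how often it prefers the verified proof over the injected-error variant. The sketch below shows one plausible way to compute that pairwise preference accuracy; the `PreferencePair` schema and the `score_proof` interface are illustrative assumptions, not the benchmark's actual format or API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PreferencePair:
    theorem: str    # Lean 4 theorem statement
    correct: str    # verified proof
    incorrect: str  # variant produced by an error injection strategy
    strategy: str   # e.g. "minimal single-point variation"

def preference_accuracy(
    pairs: list[PreferencePair],
    score_proof: Callable[[str, str], float],
) -> float:
    """Fraction of pairs where the reward model scores the correct proof
    strictly higher than the injected-error variant (ties count as failures)."""
    wins = sum(
        1
        for p in pairs
        if score_proof(p.theorem, p.correct) > score_proof(p.theorem, p.incorrect)
    )
    return wins / len(pairs)

if __name__ == "__main__":
    pairs = [
        PreferencePair(
            theorem="theorem t (a b : Nat) : a + b = b + a",
            correct="by exact Nat.add_comm a b",
            incorrect="by exact Nat.add_comm b a",
            strategy="minimal single-point variation",
        )
    ]
    # A trivial length-based scorer cannot separate this pair (both proofs
    # have the same length), so it scores 0% here; a real evaluation would
    # plug in an actual reward model's scoring function.
    baseline = lambda thm, proof: -len(proof)
    print(f"preference accuracy: {preference_accuracy(pairs, baseline):.1%}")
```

Because ties count as failures under the strict comparison, a degenerate scorer that assigns every proof the same value scores 0% in this sketch, which makes the metric sensitive to exactly the kind of fine-grained discrimination the benchmark targets.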