QED-Nano: Teaching a Tiny Model to Prove Hard Theorems

2026-04-06

Artificial Intelligence · Computation and Language · Machine Learning
AI summary

The authors explore whether small, open AI models can be trained to solve very hard math problems, like those from the International Mathematical Olympiad. They developed QED-Nano, a 4-billion-parameter model, and improved it through three training stages: distilling proofs from a larger model, reinforcement learning with rubric-based rewards, and breaking long proofs into smaller summarize-and-refine cycles for stronger reasoning. The resulting model outperforms many larger open models and nearly matches some proprietary systems while running at much lower cost. The authors also release all their models, data, and code to help others study and improve open math-reasoning AI.

International Mathematical Olympiad · AI reasoning · proof generation · supervised fine-tuning · reinforcement learning · rubric-based reward · model post-training · QED-Nano · open source AI · iterative refinement
Authors
LM-Provers, Yuxiao Qu, Amrith Setlur, Jasper Dekoninck, Edward Beeching, Jia Li, Ian Wu, Lewis Tunstall, Aviral Kumar
Abstract
Proprietary AI systems have recently demonstrated impressive capabilities on complex proof-based problems, with gold-level performance reported at the 2025 International Mathematical Olympiad (IMO). However, the training pipelines behind these systems remain largely undisclosed, and their reliance on large "internal" models and scaffolds makes them expensive to run, difficult to reproduce, and hard to study or improve upon. This raises a central question: can small, open models also be trained to achieve competitive reasoning performance on difficult Olympiad-level math? In this paper, we answer this question by building QED-Nano, a 4B model post-trained for Olympiad-level proofs. Our training recipe has three stages: (1) supervised fine-tuning to imbue good proof-writing styles by distilling from DeepSeek-Math-V2, (2) reinforcement learning (RL) with rubric-based rewards, and (3) expanding RL with a reasoning cache, which decomposes long proofs into iterative summarize-and-refine cycles and enables stronger test-time reasoning. QED-Nano surpasses the proof-generation performance of much larger open models, including Nomos-1 and GPT-OSS-120B, and approaches the performance of proprietary models like Gemini 3 Pro, at a fraction of the inference cost. To support further research on open mathematical reasoning, we release the full QED-Nano pipeline, including the QED-Nano and QED-Nano-SFT models, the FineProofs-SFT and FineProofs-RL datasets, and the training and evaluation code.
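The third stage of the recipe, RL with a reasoning cache, decomposes long proofs into iterative summarize-and-refine cycles. The paper does not spell out the loop, but the idea can be sketched as follows; this is a minimal hypothetical illustration, not the authors' implementation, and `generate` is a stand-in for any LLM call.

```python
# Hypothetical sketch of a "reasoning cache" loop: the model alternates
# between refining a proof attempt and compressing its reasoning into a
# short summary that seeds the next cycle, keeping context bounded even
# for long proofs.

def generate(prompt: str) -> str:
    # Placeholder for a model call; here we just echo for illustration.
    return f"[model output for: {prompt[:40]}...]"

def summarize_and_refine(problem: str, cycles: int = 3) -> str:
    summary = ""  # the reasoning cache: compressed state carried across cycles
    proof = ""
    for _ in range(cycles):
        # Refine: attempt a full proof, conditioned on the cached summary.
        proof = generate(
            f"Problem: {problem}\nNotes so far: {summary}\nWrite a complete proof."
        )
        # Summarize: compress the attempt's key ideas and remaining gaps
        # into notes for the next cycle, instead of carrying the whole trace.
        summary = generate(f"Summarize the key ideas and gaps in: {proof}")
    return proof
```

The design point is that each cycle conditions only on the compact summary rather than the full prior transcript, which is what lets a small model sustain test-time reasoning over proofs longer than its effective context.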