M2A: Synergizing Mathematical and Agentic Reasoning in Large Language Models
2026-05-11 • Artificial Intelligence
AI summary
The authors address how current large language models struggle to combine two types of reasoning: math-based logic and interactive agent-style thinking. They propose M2A, a method that merges these two reasoning styles within the model parameters without retraining, by adding math reasoning features in a way that doesn’t interfere with agent behavior. This approach helps the model handle longer, more complex reasoning in coding tasks and improves performance on a coding benchmark. Their method offers an easy way to balance reasoning depth without extra training.
large language models, mathematical reasoning, agentic reasoning, model merging, parameter space, multi-task learning, gradient update, coding agent, fine-tuning, SWE-Bench
Authors
Junjian Wang, Xin Zhou, Qiran Xu, Kun Zhan
Abstract
While reasoning has become a central capability of large language models (LLMs), the reasoning patterns required for different scenarios are often misaligned. Mathematical reasoning typically relies on intrinsic logic to solve closed-world problems in a single response, whereas agentic reasoning requires not only internal reasoning but also multi-turn interaction with external environments, interleaving thought and action. This misalignment prevents mathematical and agentic reasoning from effectively benefiting from each other, often yielding unstable reasoning behavior and only limited performance gains under multi-task learning. In this paper, we propose M2A, a novel paradigm that synergizes mathematical and agentic reasoning via model merging. To avoid overfitting to superficial reasoning patterns under joint training, M2A operates directly in parameter space: it identifies the feature subspace critical for agent behavior, and merges the mathematical reasoning task vector only along its null space, thereby injecting reasoning capability along directions that do not perturb agent behavior. Unlike SFT or RL, M2A requires no additional gradient updates and exposes the merging coefficient as a simple knob for controlling reasoning length. Experiments in a challenging real-world coding agent setting show that our method effectively extends agentic reasoning depth and delivers substantial performance improvements. Applied to a fine-tuned Qwen3-8B, M2A improves its SWE-Bench Verified resolved rate from 44.0% to 51.2% without retraining the model. Code is available at https://github.com/laplucky/M2A.git.
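The core mechanism in the abstract, merging a task vector only along the null space of an agent-critical subspace, can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes a matrix `F` whose rows span the feature directions identified as critical for agent behavior, a mathematical-reasoning task vector `tau` (tuned weights minus base weights), and a merging coefficient `lam`; how `F` is estimated per layer is left out here.

```python
import numpy as np

def project_to_null_space(tau, F, tol=1e-10):
    """Remove from tau any component lying in the row space of F.

    F : (k, d) matrix whose rows span the agent-critical subspace.
    tau : (d,) task vector to be projected onto the null space of F.
    """
    # Orthonormal basis of the agent-critical subspace via SVD
    _, S, Vt = np.linalg.svd(F, full_matrices=False)
    B = Vt[S > tol]              # rows form an orthonormal basis
    # Subtract the component of tau inside that subspace
    return tau - B.T @ (B @ tau)

def null_space_merge(theta_agent, tau_math, F, lam=0.5):
    """Add the math task vector to the agent model only along
    directions that do not perturb agent-critical features."""
    return theta_agent + lam * project_to_null_space(tau_math, F)
```

By construction the injected update is orthogonal to every row of `F`, so a linear probe reading the agent-critical features sees no change, while `lam` scales how much mathematical-reasoning capability is mixed in.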