Chart-RL: Policy Optimization Reinforcement Learning for Enhanced Visual Reasoning in Chart Question Answering with Vision Language Models
2026-04-03 • Artificial Intelligence
AI summary
The authors worked on making AI models better at understanding charts and answering questions about them, which is hard because charts have numbers and relationships that are tricky to read. They created a new method called Chart-RL that teaches the AI through trial and error (reinforcement learning) to improve how it sees and reasons about charts. Their approach uses smart tuning techniques to make the training faster and work well even on a single graphics card. Testing showed their improved model was more accurate and faster than larger existing models. This means their method helps AI understand complex visual data with fewer resources.
Vision Language Models · Chart Question Answering · Reinforcement Learning · Policy Optimization · Visual Perception · Logical Inference · Low-Rank Adaptation · Parameter-Efficient Fine-Tuning · ChartQAPro dataset · Inference Latency
Authors
Yunfei Bai, Amit Dhanda, Shekhar Jain
Abstract
Recent advancements in Vision Language Models (VLMs) have demonstrated progress toward the robust reasoning capabilities that true intelligence requires. Beyond pattern recognition, linguistic reasoning must integrate with visual comprehension, particularly for Chart Question Answering (CQA) tasks involving complex data visualizations. Current VLMs face significant limitations in CQA, including imprecise numerical extraction, difficulty interpreting implicit visual relationships, and inadequate attention mechanisms for capturing spatial relationships in charts. In this work, we address these challenges by presenting Chart-RL, a novel reinforcement learning framework that enhances VLMs' chart understanding through feedback-driven policy optimization of visual perception and logical inference. Our key innovation is a comprehensive framework that integrates reinforcement learning (RL) with policy optimization techniques and adaptive reward functions, demonstrating superior performance compared to baseline foundation models and competitive results against larger state-of-the-art architectures. We also integrate Parameter-Efficient Fine-Tuning via Low-Rank Adaptation (LoRA) into the RL framework, so that training requires only a single GPU while preserving performance. We conducted extensive benchmarking across open-source, proprietary, and state-of-the-art closed-source models on the ChartQAPro dataset. The RL fine-tuned Qwen3-VL-4B-Instruct model achieved an answer accuracy of 0.634, surpassing the 0.580 accuracy of the Qwen3-VL-8B-Instruct foundation model despite using half the parameter count, while simultaneously reducing inference latency from 31 seconds to 9 seconds.
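The abstract's parameter-efficiency claim rests on the standard LoRA construction: the frozen base weight W is augmented with a trainable low-rank update (alpha/r) · B·A, so only r·(d_in + d_out) parameters are trained instead of d_in·d_out. The sketch below illustrates that arithmetic in plain Python; the matrices, dimensions, and function names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the Low-Rank Adaptation (LoRA) forward pass.
# All matrices and dimensions here are toy examples for illustration,
# not the configuration used by Chart-RL.

def matmul(A, B):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B, alpha, r):
    """Compute (W + (alpha / r) * B @ A) @ x.

    W is the frozen base weight (d_out x d_in); only the low-rank
    factors A (r x d_in) and B (d_out x r) are trained, so the
    trainable parameter count is r * (d_in + d_out) rather than
    d_in * d_out.
    """
    base = matmul(W, x)               # frozen base-model path
    delta = matmul(B, matmul(A, x))   # trainable low-rank path
    scale = alpha / r
    return [[b + scale * d for b, d in zip(br, dr)]
            for br, dr in zip(base, delta)]

# Toy example: identity base weight, rank-1 adapter, scaling alpha/r = 2.
y = lora_forward(x=[[2], [3]],
                 W=[[1, 0], [0, 1]],
                 A=[[1, 1]],
                 B=[[1], [1]],
                 alpha=2.0, r=1)
```

For a 4B-parameter model, this rank bottleneck is what lets the RL fine-tuning in the abstract fit on a single GPU: gradients and optimizer state are kept only for the small A and B factors while W stays frozen.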