ConceptCoder: Improve Code Reasoning via Concept Learning
2026-03-24 • Software Engineering
AI summary
The authors present ConceptCoder, a method that helps language models better understand and find security problems in code by first teaching them to recognize important parts, called 'concepts.' This approach mimics how humans inspect code by breaking it down into meaningful pieces before making decisions. Their experiments show this method improves vulnerability detection accuracy across several language models and also helps with other coding tasks like branch prediction. They provide their code and datasets for others to use.
large language models • vulnerability detection • code reasoning • fine-tuning • semantic concepts • software engineering • branch prediction • CWE • multimodal models
Authors
Md Mahbubur Rahman, Hengbo Tong, Wei Le
Abstract
Large language models (LLMs) have shown promising results for software engineering applications, but still struggle with code reasoning tasks such as vulnerability detection (VD). We introduce ConceptCoder, a fine-tuning method that simulates human code inspection: models are trained to first recognize code concepts and then perform reasoning on top of these concepts. In prior work, concepts are extracted by multimodal models or LLMs to explain vision and natural language models. Our work is the first to formulate concepts for code. We define code concepts as human-understandable semantic properties of code and train models to learn such concepts. Our evaluation shows that this approach significantly improves VD accuracy, from 66.32 to 72.15 F1 on average over 9 open-source LLMs. ConceptCoder achieves the best VD performance compared to state-of-the-art (SOTA) baselines, including fine-tuned SOTA open-source LLMs and prompted proprietary models such as GPT-5.2 and Claude-Opus-4.5. Our approach also scales: concepts defined from four types of vulnerabilities benefit general vulnerability datasets with 134 CWEs. We further demonstrate that concept-based fine-tuning generalizes beyond VD and improves branch prediction. We release our code and datasets at https://figshare.com/s/1decab8232c653b44f71.
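To make the two-stage idea concrete, the sketch below shows one plausible way to format a supervised fine-tuning sample in which the target output names code concepts before giving a vulnerability verdict. This is a hedged illustration, not the authors' released code: the prompt wording, the `build_sample` helper, and the concept labels ("unchecked buffer write", "tainted input reaches sink") are all hypothetical assumptions.

```python
# Hedged sketch of a ConceptCoder-style fine-tuning sample: the model is
# trained to state the code concepts it recognizes first, then its verdict.
# All names and labels here are illustrative, not from the paper's artifacts.

def build_sample(code: str, concepts: list[str], vulnerable: bool) -> dict:
    """Format one supervised example: concepts first, verdict second."""
    concept_lines = "\n".join(f"- {c}" for c in concepts)
    completion = (
        "Concepts observed:\n"
        f"{concept_lines}\n"
        f"Verdict: {'vulnerable' if vulnerable else 'not vulnerable'}"
    )
    return {
        "prompt": f"Inspect the following code for vulnerabilities:\n{code}",
        "completion": completion,
    }

sample = build_sample(
    code="strcpy(buf, user_input);",
    concepts=["unchecked buffer write", "tainted input reaches sink"],
    vulnerable=True,
)
print(sample["completion"])
```

The point of the format is that the reasoning target is structured: the verdict is conditioned on explicitly stated, human-understandable semantic properties of the code rather than produced directly.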