Defective Task Descriptions in LLM-Based Code Generation: Detection and Analysis

2026-04-27 · Software Engineering · Artificial Intelligence
AI summary

The authors develop SpecValidator, a lightweight AI tool that checks whether task descriptions for code generation are clear and detailed enough. They test it on three common types of description defects and find that it identifies issues far better than GPT-5-mini and Claude Sonnet 4. They also find that an LLM's robustness to defective descriptions depends more on the type of defect and how the task is described than on the power of the model itself, and that tasks with richer background context are handled more reliably.

Large Language Models · Code Generation · Task Descriptions · Defect Detection · Lexical Vagueness · Under-Specification · Syntax Formatting · F1 Score · Matthews Correlation Coefficient · Benchmark
Authors
Amal Akli, Mike Papadakis, Maxime Cordy, Yves Le Traon
Abstract
Large language models are widely used for code generation, yet they rely on an implicit assumption that task descriptions are sufficiently detailed and well-formed. In practice, however, users may provide defective descriptions, which can strongly affect code correctness. To address this issue, we develop SpecValidator, a lightweight classifier based on a small, parameter-efficiently fine-tuned model, to automatically detect task description defects. We evaluate SpecValidator on three types of defects (Lexical Vagueness, Under-Specification, and Syntax-Formatting) across three benchmarks with task descriptions of varying structure and complexity. Our results show that SpecValidator achieves defect detection of F1 = 0.804 and MCC = 0.745, significantly outperforming GPT-5-mini (F1 = 0.469 and MCC = 0.281) and Claude Sonnet 4 (F1 = 0.518 and MCC = 0.359). Perhaps more importantly, our analysis indicates that SpecValidator generalizes to unseen issues, detecting previously unknown Under-Specification defects in the original (real) descriptions of the benchmarks used. Our results also show that the robustness of LLMs to task description defects depends primarily on the type of defect and the characteristics of the task description, rather than the capacity of the model, with Under-Specification defects being the most severe. We further found that benchmarks with richer contextual grounding, such as LiveCodeBench, exhibit substantially greater resilience, highlighting the importance of structured task descriptions for reliable LLM-based code generation.
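The abstract reports both F1 and MCC because MCC accounts for all four confusion-matrix cells and is more informative than F1 on imbalanced detection tasks. As a minimal sketch of how the two metrics relate, the following computes both from raw binary counts; the counts used are purely illustrative and are not taken from the paper.

```python
def f1_and_mcc(tp, fp, fn, tn):
    """Compute the F1 score and Matthews correlation coefficient
    from binary confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    # MCC uses all four cells, so it penalizes a classifier that
    # scores well on the positive class but poorly on negatives.
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return f1, mcc

# Hypothetical counts for illustration only (not from the paper):
f1, mcc = f1_and_mcc(tp=80, fp=15, fn=25, tn=120)
print(f1, mcc)  # f1 is exactly 160/200 = 0.8 here; mcc is lower
```

Note that MCC comes out noticeably lower than F1 on the same counts, which is why a gap between a model's F1 and MCC (as in the numbers reported above) signals uneven performance across the positive and negative classes.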