Automated Test Suite Enhancement Using Large Language Models with Few-shot Prompting

2026-02-12
Software Engineering
AI summary

The authors studied how large language models (LLMs) such as GPT-4o can generate unit tests for code when given a few example tests in the prompt (few-shot prompting). They compared examples written by humans, generated by traditional search-based testing tools, and generated by other LLMs to see which source helps the model produce better tests. Their experiments showed that human-written examples lead to the best test correctness and coverage. They also found that choosing examples similar to both the problem description and the code makes the LLM-generated tests more effective. The work highlights how combining human and AI efforts can improve test quality in software projects.
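
As a concrete illustration of few-shot prompting for test generation, the sketch below assembles a prompt from example (code, test) pairs and sends it to GPT-4o through the OpenAI Python client. This is a minimal sketch, not the authors' implementation: the prompt wording, example format, and helper names are illustrative assumptions.

```python
# Minimal sketch of few-shot unit test generation (illustrative, not the
# authors' implementation). Assumes the OpenAI Python client (openai>=1.0)
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def build_few_shot_prompt(examples, target_description, target_code):
    """Interleave (code, test) example pairs before the target under test."""
    parts = ["Write Python unit tests for the target function below.\n"]
    for i, (example_code, example_test) in enumerate(examples, start=1):
        parts.append(f"# Example {i}: code under test\n{example_code}\n")
        parts.append(f"# Example {i}: unit tests\n{example_test}\n")
    parts.append(f"# Target problem description\n{target_description}\n")
    parts.append(f"# Target code under test\n{target_code}\n")
    parts.append("# Unit tests for the target code:")
    return "\n".join(parts)

def generate_tests(examples, target_description, target_code):
    prompt = build_few_shot_prompt(examples, target_description, target_code)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

In such a setup, swapping the `examples` list between human-written, SBST-generated, and LLM-generated tests is what distinguishes the prompting conditions compared in the study.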

unit testing, large language models, few-shot prompting, search-based software testing, test coverage, test correctness, prompt engineering, program comprehension, GPT-4o, human-AI collaboration
Authors
Alex Chudic, Gül Çalıklı
Abstract
Unit testing is essential for verifying the functional correctness of code modules (e.g., classes, methods), but manually writing unit tests is labor-intensive and time-consuming. Unit tests generated by tools that employ traditional approaches, such as search-based software testing (SBST), lack readability, naturalness, and practical usability. Large language models (LLMs) have recently shown promising results and become integral to developers' daily practices. Consequently, software repositories now contain a mix of human-written tests, LLM-generated tests, and tests produced by traditional tools such as SBST. While LLMs' zero-shot capabilities have been widely studied, their few-shot learning potential for unit test generation remains underexplored. Few-shot prompting enables LLMs to learn from examples included in the prompt, and automatically retrieving such examples could enhance existing test suites. This paper empirically investigates how few-shot prompting with different sources of example tests (human-written, SBST-generated, or LLM-generated) affects the quality of LLM-generated unit tests as program comprehension artifacts and their contribution to improving existing test suites, evaluating not only correctness and coverage but also readability, cognitive complexity, and maintainability in hybrid human-AI codebases. We conducted experiments on the HumanEval and ClassEval datasets using GPT-4o, which is integrated into GitHub Copilot and widely used by developers. We also assessed retrieval-based methods for selecting relevant examples. Our results show that LLMs can generate high-quality tests via few-shot prompting, with human-written examples producing the best coverage and correctness. Additionally, selecting examples based on the combined similarity of problem description and code consistently yields the most effective few-shot prompts.
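
As an illustration of the retrieval idea mentioned in the abstract, the following sketch ranks candidate examples by a weighted combination of description similarity and code similarity, here using TF-IDF features and cosine similarity. The weighting, data format, and function names are assumptions for illustration, not the paper's exact retrieval method.

```python
# Minimal sketch of retrieval-based example selection by combined similarity
# of problem description and code (illustrative; not the paper's method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_examples(pool, target_description, target_code, k=3, alpha=0.5):
    """pool: list of dicts with 'description', 'code', and 'test' keys.
    Returns the k pool entries most similar to the target, mixing description
    and code similarity with weight alpha."""
    descriptions = [p["description"] for p in pool] + [target_description]
    codes = [p["code"] for p in pool] + [target_code]

    desc_matrix = TfidfVectorizer().fit_transform(descriptions)
    code_matrix = TfidfVectorizer().fit_transform(codes)

    # Cosine similarity of every pool entry to the target (the last row).
    desc_sim = cosine_similarity(desc_matrix[:-1], desc_matrix[-1:]).ravel()
    code_sim = cosine_similarity(code_matrix[:-1], code_matrix[-1:]).ravel()

    combined = alpha * desc_sim + (1 - alpha) * code_sim
    top = sorted(range(len(pool)), key=lambda i: combined[i], reverse=True)[:k]
    return [pool[i] for i in top]
```

The selected examples would then be placed into the few-shot prompt; per the abstract, combining description and code similarity in this spirit consistently yields the most effective prompts.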