Boosting LLMs for Mutation Generation

2026-03-25

Software Engineering
AI summary

The authors present SMART, a new method to improve mutation testing using large language models (LLMs). Unlike previous methods that use fixed or limited example mutations, SMART enhances quality by using real bug data, smart code chunking, and fine-tuning. Their tests on nearly 2,000 real Java bugs show that SMART creates more valid and unique mutations, detects bugs more effectively, and helps with tasks like prioritizing tests and locating faults. SMART also allows smaller models to perform as well as much larger ones. Overall, the authors demonstrate that SMART offers clear improvements over existing LLM-based mutation testing tools.
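As a quick refresher on the core idea behind mutation testing: a mutant is a small syntactic change to the program under test, and a test suite "kills" the mutant if at least one test distinguishes it from the original. A minimal Python sketch of this (the paper targets Java, but the concept is language-agnostic; the function names here are illustrative, not from the paper):

```python
# Original function and a "mutant" with one operator changed (+ -> -),
# the kind of small syntactic change a mutation tool generates.
def add(a, b):
    return a + b

def add_mutant(a, b):
    return a - b  # mutated arithmetic operator

def kills(a, b):
    # A test input "kills" the mutant if the original and the mutant
    # produce different results on it.
    return add(a, b) != add_mutant(a, b)

# (2, 3) kills the mutant (5 vs -1); (2, 0) does not (2 vs 2),
# so a suite containing only the latter would let this mutant survive.
```

The quality metrics discussed below (validity, non-duplication, compilability, bug detection) all measure how useful the generated mutants are as stand-ins for real bugs.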

Mutation Testing · Large Language Models (LLMs) · Retrieval-Augmented Generation (RAG) · Fine-Tuning · Real-World Bugs · Defects4J · Bug Detection Rate · Fault Localization · Ochiai Coefficient · Test Case Prioritization
Authors
Bo Wang, Ming Deng, Mingda Chen, Chengran Yang, Youfang Lin, Mark Harman, Mike Papadakis, Jie M. Zhang
Abstract
LLM-based mutation testing is a promising testing technology, but existing approaches typically rely on a fixed set of mutations as few-shot examples, or on none at all. This can result in generic, low-quality mutations, missed context-specific mutation patterns, substantial numbers of redundant and uncompilable mutants, and limited semantic similarity to real bugs. To overcome these limitations, we introduce SMART (Semantic Mutation with Adaptive Retrieval and Tuning). SMART integrates retrieval-augmented generation (RAG) over a vectorized dataset of real-world bugs, focused code chunking, and supervised fine-tuning on mutations coupled with real-world bugs. We conducted an extensive empirical study of SMART using 1,991 real-world Java bugs from the Defects4J and ConDefects datasets, comparing SMART to the state-of-the-art LLM-based approaches LLMut and LLMorpheus. The results reveal that SMART substantially improves mutation validity, effectiveness, and efficiency, enabling small 7B-scale models to match or even surpass large models like GPT-4o. We also demonstrate that SMART significantly improves downstream software engineering applications, including test case prioritization and fault localization. More specifically, SMART improves validity (weighted average generation rate) from 42.89% to 65.6%. It raises the non-duplicate rate from 87.38% to 95.62%, and the compilable rate from 88.85% to 90.21%. In terms of effectiveness, it achieves a real bug detection rate of 92.61% (vs. 57.86% for LLMut) and improves the average Ochiai coefficient from 25.61% to 38.44%. For fault localization, SMART ranks 64 more bugs at Top-1 under MUSE and 57 more under Metallaxis.
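For context on the Ochiai coefficient used above to score fault localization, here is a minimal sketch of the standard formula, suspiciousness = failed(s) / sqrt(totalFailed × (failed(s) + passed(s))); the variable names are illustrative, not from the paper:

```python
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """Ochiai suspiciousness of a program element.

    failed_cov:   failing tests that cover (execute) the element
    passed_cov:   passing tests that cover the element
    total_failed: total failing tests in the suite
    """
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

# An element covered by every failing test and no passing test
# receives the maximum score of 1.0.
```

Elements are ranked by this score; metrics such as Top-1 (used for the MUSE and Metallaxis comparisons above) count how often the actual faulty element is ranked first.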