SPARC: Scenario Planning and Reasoning for Automated C Unit Test Generation

2026-02-18

Software Engineering · Artificial Intelligence
AI summary

The authors address the difficulty of automatically generating unit tests for C programs, which stems from complex pointer and memory-management rules. Their system, SPARC, grounds language models in program structure by analyzing control flow, supplying validated helper functions, targeting specific execution paths, and iteratively repairing errors with compiler and runtime feedback. In evaluation, SPARC produced more reliable, higher-coverage test code than direct prompting and matched the symbolic execution tool KLEE, with tests that developers rated easier to read and maintain. The approach aims to make automated testing practical for large legacy C codebases.
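To make the "validated helper" idea concrete, here is a minimal C sketch; all names are hypothetical and invented for illustration, since the summary does not publish the actual helper contents. The intent is that generated tests call small, pre-verified utilities like these instead of hand-writing error-prone pointer and memory setup.

```c
/* Hypothetical sketch of a pre-validated fixture helper (names invented
 * for illustration). Generated tests would call make_list/free_list
 * rather than re-deriving the pointer bookkeeping themselves. */
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Releases every node in the list. */
static void free_list(Node *head) {
    while (head != NULL) {
        Node *next = head->next;
        free(head);
        head = next;
    }
}

/* Builds a singly linked list from an array; returns NULL on allocation
 * failure after freeing any partially built list. */
static Node *make_list(const int *values, size_t n) {
    Node *head = NULL, **tail = &head;
    for (size_t i = 0; i < n; i++) {
        Node *node = malloc(sizeof *node);
        if (node == NULL) {
            free_list(head);  /* release the partial list on failure */
            return NULL;
        }
        node->value = values[i];
        node->next = NULL;
        *tail = node;
        tail = &node->next;
    }
    return head;
}

int main(void) {
    const int xs[] = {1, 2, 3};
    Node *list = make_list(xs, sizeof xs / sizeof xs[0]);
    /* A generated test would exercise the function under test with `list`
     * here and assert on the result before freeing the fixture. */
    free_list(list);
    return 0;
}
```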

Unit Test Generation · C Programming · Large Language Models · Control Flow Graph · Symbolic Execution · Branch Coverage · Mutation Score · Pointer Arithmetic · Memory Management · Neuro-symbolic Systems
Authors
Jaid Monwar Chowdhury, Chi-An Fu, Reyhaneh Jabbarvand
Abstract
Automated unit test generation for C remains a formidable challenge due to the semantic gap between high-level program intent and the rigid syntactic constraints of pointer arithmetic and manual memory management. While Large Language Models (LLMs) exhibit strong generative capabilities, direct intent-to-code synthesis frequently suffers from a "leap-to-code" failure mode, where models prematurely emit code without grounding in program structure, constraints, and semantics. This results in non-compilable tests, hallucinated function signatures, low branch coverage, and semantically irrelevant assertions that cannot reliably expose bugs. We introduce SPARC, a neuro-symbolic, scenario-based framework that bridges this gap through four stages: (1) Control Flow Graph (CFG) analysis, (2) an Operation Map that grounds LLM reasoning in validated utility helpers, (3) path-targeted test synthesis, and (4) an iterative self-correction validation loop using compiler and runtime feedback. We evaluate SPARC on 59 real-world and algorithmic subjects, where it outperforms the vanilla prompt generation baseline by 31.36% in line coverage, 26.01% in branch coverage, and 20.78% in mutation score, matching or exceeding the symbolic execution tool KLEE on complex subjects. SPARC retains 94.3% of tests through iterative repair and produces code with significantly higher developer-rated readability and maintainability. By aligning LLM reasoning with program structure, SPARC provides a scalable path for industrial-grade testing of legacy C codebases.
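To illustrate what path-targeted synthesis with compile-and-run validation aims to produce, the sketch below pairs a tiny function having two CFG paths with one test per path. The function safe_strdup and both tests are hypothetical examples, not artifacts from SPARC's evaluation; the point is the shape of the output, with one test per path and concrete assertions on observable behavior.

```c
/* Hypothetical path-targeted tests (illustrative only): each test
 * exercises exactly one CFG path and asserts on the observable result. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Function under test: duplicates a string, returning NULL on NULL input. */
static char *safe_strdup(const char *s) {
    if (s == NULL)              /* CFG path 1: NULL guard */
        return NULL;
    size_t n = strlen(s) + 1;
    char *copy = malloc(n);     /* CFG path 2: allocate and copy */
    if (copy != NULL)
        memcpy(copy, s, n);
    return copy;
}

/* Targets path 1: the NULL guard must short-circuit. */
static void test_null_input(void) {
    assert(safe_strdup(NULL) == NULL);
}

/* Targets path 2: a fresh, equal copy is returned and then freed. */
static void test_copy_path(void) {
    char *copy = safe_strdup("sparc");
    assert(copy != NULL);
    assert(strcmp(copy, "sparc") == 0);
    free(copy);
}

int main(void) {
    test_null_input();
    test_copy_path();
    return 0;
}
```

In the pipeline described above, a test like this would only be retained after it compiles and runs cleanly; a failure would feed the compiler or runtime diagnostics back into the repair loop.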