Incremental Neural Network Verification via Learned Conflicts

2026-03-12

Logic in Computer Science · Artificial Intelligence
AI summary

When a sequence of closely related neural network verification queries is solved one after another, the verifier often repeats similar work. The authors propose a method that records conflicts (combinations of activation phases shown to be infeasible during earlier checks) and carries them over to later queries, so the verifier does not re-explore the same impossible cases. The approach can be layered on top of existing branch-and-bound verification tools and, in their experiments, speeds up verification by up to 1.9×.

Neural network verification, Incremental verification, Branch-and-bound, Conflict learning, Activation phases, SAT solver, Local robustness, Input splitting, Feature set extraction, Marabou verifier
Authors
Raya Elsaleh, Liam Davis, Haoze Wu, Guy Katz
Abstract
Neural network verification is often used as a core component within larger analysis procedures, which generate sequences of closely related verification queries over the same network. In existing neural network verifiers, each query is typically solved independently, and information learned during previous runs is discarded, leading to repeated exploration of the same infeasible regions of the search space. In this work, we aim to expedite verification by reducing this redundancy. We propose an incremental verification technique that reuses learned conflicts across related verification queries. The technique can be added on top of any branch-and-bound-based neural network verifier. During verification, the verifier records conflicts corresponding to learned infeasible combinations of activation phases, and retains them across runs. We formalize a refinement relation between verification queries and show that conflicts learned for a query remain valid under refinement, enabling sound conflict inheritance. Inherited conflicts are handled using a SAT solver to perform consistency checks and propagation, allowing infeasible subproblems to be detected and pruned early during search. We implement the proposed technique in the Marabou verifier and evaluate it on three verification tasks: local robustness radius determination, verification with input splitting, and minimal sufficient feature set extraction. Our experiments show that incremental conflict reuse reduces verification effort and yields speedups of up to $1.9\times$ over a non-incremental baseline.
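The abstract describes the core mechanism: record infeasible combinations of activation phases as conflicts, inherit them across refined queries, and use SAT-style subsumption checks and unit propagation to prune subproblems early. The following is a minimal, self-contained sketch of what such a conflict store might look like; the class and method names are illustrative assumptions, not Marabou's actual API, and ReLU activation phases are encoded as 0 (inactive) / 1 (active):

```python
class ConflictStore:
    """Stores learned conflicts: sets of activation-phase assignments
    that were proven infeasible during earlier verification runs."""

    def __init__(self):
        # Each conflict is a frozenset of (neuron_id, phase) pairs that
        # cannot all hold simultaneously; it acts as a blocking clause.
        self.conflicts = []

    def add(self, phases):
        """Record a newly learned infeasible phase combination."""
        self.conflicts.append(frozenset(phases))

    def check(self, assignment):
        """Return False if the partial assignment (neuron_id -> phase)
        is subsumed by a stored conflict, i.e. the subproblem defined
        by this branch-and-bound node can be pruned immediately."""
        for conflict in self.conflicts:
            if all(assignment.get(n) == p for n, p in conflict):
                return False
        return True

    def propagate(self, assignment):
        """SAT-style unit propagation: if all but one literal of a
        conflict already hold, the remaining neuron is forced into
        the opposite phase. Returns the implied phase assignments."""
        implied = {}
        for conflict in self.conflicts:
            unassigned = [(n, p) for n, p in conflict if n not in assignment]
            satisfied = all(assignment.get(n) == p
                            for n, p in conflict if n in assignment)
            if satisfied and len(unassigned) == 1:
                n, p = unassigned[0]
                implied[n] = 1 - p  # binary ReLU phase: force the other one
        return implied


# Example: neuron 3 active together with neuron 7 inactive is infeasible.
store = ConflictStore()
store.add([(3, 1), (7, 0)])
print(store.check({3: 1, 7: 0}))   # conflicting subproblem, prune
print(store.propagate({3: 1}))     # neuron 7 is forced active
```

Because a refined query only shrinks the feasible region, a phase combination that was infeasible for the original query stays infeasible after refinement, which is what makes inheriting these clauses sound.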