Semantic Invariance in Agentic AI

2026-03-13

Artificial Intelligence, Computation and Language
AI summary

The authors studied how reliable large language models (LLMs) are when given questions that mean the same thing but are worded differently. They created a testing method that changes problem descriptions in eight ways without changing their meaning, to see if LLMs give consistent answers. Testing across several big models and scientific problems, they found that bigger models are not always more reliable. In fact, a smaller model showed the most consistent reasoning when faced with these reworded inputs. This shows that model size doesn’t guarantee stable understanding.

Large Language Models, Semantic Invariance, Metamorphic Testing, Reasoning Robustness, Paraphrase, Model Scale, Multi-step Reasoning, Foundation Models
Authors
I. de Zarzà, J. de Curtò, Jordi Cabot, Pietro Manzoni, Carlos T. Calafate
Abstract
Large Language Models (LLMs) increasingly serve as autonomous reasoning agents in decision support, scientific problem-solving, and multi-agent coordination systems. However, deploying LLM agents in consequential applications requires assurance that their reasoning remains stable under semantically equivalent input variations, a property we term semantic invariance. Standard benchmark evaluations, which assess accuracy on fixed, canonical problem formulations, fail to capture this critical reliability dimension. To address this shortcoming, we present a metamorphic testing framework for systematically assessing the robustness of LLM reasoning agents, applying eight semantic-preserving transformations (identity, paraphrase, fact reordering, expansion, contraction, academic context, business context, and contrastive formulation) across seven foundation models spanning four distinct architectural families: Hermes (70B, 405B), Qwen3 (30B-A3B, 235B-A22B), DeepSeek-R1, and gpt-oss (20B, 120B). Our evaluation encompasses 19 multi-step reasoning problems across eight scientific domains. The results reveal that model scale does not predict robustness: the smaller Qwen3-30B-A3B achieves the highest stability (79.6% invariant responses, semantic similarity 0.91), while larger models exhibit greater fragility.
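The metamorphic testing loop described above can be sketched in a few lines: apply semantic-preserving transformations to each problem, query the model, and measure how close each response stays to the response for the canonical formulation. This is a minimal illustration, not the authors' implementation; the `query_model` stub, the toy bag-of-words embedding, the two example transformations, and the 0.85 threshold are all assumptions standing in for an LLM API call, a sentence encoder, the paper's eight transformations, and its invariance criterion.

```python
# Sketch of a metamorphic test for semantic invariance.
# Assumptions (not from the paper): a deterministic stub model, a
# bag-of-words "embedding", two of the eight transformations, and
# an illustrative similarity threshold of 0.85.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real harness would use a sentence encoder.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Two semantic-preserving transformations (the paper applies eight).
TRANSFORMS = {
    "identity": lambda p: p,
    "paraphrase": lambda p: "Please answer the following: " + p,
}


def query_model(prompt: str) -> str:
    # Stand-in for an LLM call; deterministic so the sketch is runnable.
    core = prompt.lower().removeprefix("please answer the following: ")
    return "answer to " + core


def invariance_rate(problems, threshold=0.85):
    """Fraction of (problem, transformation) pairs whose response stays
    within `threshold` cosine similarity of the canonical response."""
    invariant = total = 0
    for problem in problems:
        canonical = embed(query_model(TRANSFORMS["identity"](problem)))
        for name, transform in TRANSFORMS.items():
            if name == "identity":
                continue
            total += 1
            response = embed(query_model(transform(problem)))
            if cosine(canonical, response) >= threshold:
                invariant += 1
    return invariant / total if total else 0.0


print(invariance_rate(["What is the boiling point of water at sea level?"]))
```

With the stub model, every paraphrased prompt yields the same response as the canonical one, so the rate is 1.0; a real model under the paper's eight transformations would fall short of that, which is exactly the gap the framework quantifies.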