Do Large Language Models Understand Data Visualization Rules?

2026-02-23
Computer Vision and Pattern Recognition

AI summary

The authors studied whether large language models (LLMs) can check if data visualizations follow important design rules. They used a strict, logical system called Draco as a standard and created a dataset with charts labeled for rule violations. Their tests showed that while top LLMs were good at following instructions and spotting common errors, they struggled with more subtle design issues. The authors also found that explaining rules in plain language helped smaller models improve a lot. Overall, the work shows LLMs can help validate charts but aren't yet as precise as formal rule-based systems.

large language models, data visualization, Draco, Answer Set Programming, Vega-Lite, visualization rules, constraint-based systems, rule violations, natural language processing, chart validation
Authors
Martin Sinnona, Valentin Bonas, Emmanuel Iarussi, Viviana Siless
Abstract
Data visualization rules, derived from decades of research in design and perception, ensure trustworthy chart communication. While prior work has shown that large language models (LLMs) can generate charts or flag misleading figures, it remains unclear whether they can reason about and enforce visualization rules directly. Constraint-based systems such as Draco encode these rules as logical constraints for precise automated checks, but maintaining symbolic encodings requires expert effort, motivating the use of LLMs as flexible rule validators. In this paper, we present the first systematic evaluation of LLMs against visualization rules using hard-verification ground truth derived from Answer Set Programming (ASP). We translated a subset of Draco's constraints into natural-language statements and generated a controlled dataset of 2,000 Vega-Lite specifications annotated with explicit rule violations. LLMs were evaluated on both accuracy in detecting violations and prompt adherence, which measures whether outputs follow the required structured format. Results show that frontier models achieve high adherence (Gemma 3 4B / 27B: 100%, GPT-oss 20B: 98%) and reliably detect common violations (F1 up to 0.82), yet performance drops for subtler perceptual rules (F1 < 0.15 for some categories) and for outputs generated from technical ASP formulations. Translating constraints into natural language improved performance by up to 150% for smaller models. These findings demonstrate the potential of LLMs as flexible, language-driven validators while highlighting their current limitations compared to symbolic solvers.
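To make the validation task concrete, here is a minimal sketch of what checking a Vega-Lite specification against a single visualization rule can look like. This is an illustration only, not the paper's pipeline or dataset: the helper `violates_bar_zero_rule` is hypothetical, and the rule shown (bar charts should not truncate their quantitative axis by excluding zero) is simply one well-known perceptual guideline of the kind Draco encodes.

```python
def violates_bar_zero_rule(spec: dict) -> bool:
    """Return True if a bar chart's quantitative y-axis scale
    explicitly excludes zero (a common perceptual-rule violation)."""
    if spec.get("mark") != "bar":
        return False
    y = spec.get("encoding", {}).get("y", {})
    if y.get("type") != "quantitative":
        return False
    # Vega-Lite defaults bar scales to include zero; a violation here
    # means the spec explicitly sets scale.zero to false.
    return y.get("scale", {}).get("zero") is False


# Example spec (illustrative): a bar chart with a truncated y-axis.
spec = {
    "mark": "bar",
    "encoding": {
        "x": {"field": "category", "type": "nominal"},
        "y": {
            "field": "value",
            "type": "quantitative",
            "scale": {"zero": False},
        },
    },
}

print(violates_bar_zero_rule(spec))  # True: the bar axis is truncated
```

A symbolic system like Draco evaluates such rules as ASP constraints over a logical encoding of the spec; the paper's question is whether an LLM, given the rule as a natural-language statement instead, can reach the same verdict.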