True (VIS) Lies: Analyzing How Generative AI Recognizes Intentionality, Rhetoric, and Misleadingness in Visualization Lies

2026-04-01 · Human-Computer Interaction

Human-Computer Interaction, Computation and Language, Computer Vision and Pattern Recognition
AI summary

The authors study whether multimodal large language models can detect misleading visualizations in COVID-19 tweets and explain why they are misleading, including whether the deception appears intentional. Using a large dataset of tweets, half of which contain misleading charts, they compare 16 language models of varying sizes. They also run a user study with visualization experts to see how human judgments of the same misleading visualizations compare with the models'. This comparison reveals where the models reason like humans and where they do not, clarifying the strengths and limits of AI in detecting deceptive graphs.

Multimodal Large Language Models, Misleading Visualizations, Visualization Rhetoric, Authorial Intentions, COVID-19 Data, Perceptual Errors, Cognitive Bias, Conceptual Errors, User Study, Computer Vision
Authors
Graziano Blasilli, Marco Angelini
Abstract
This study investigates the ability of multimodal Large Language Models (LLMs) to identify and interpret misleading visualizations, recognizing both their underlying causes and their potential intentionality. Our analysis leverages concepts from visualization rhetoric and a newly developed taxonomy of authorial intents as explanatory lenses. We formulated three research questions and addressed them experimentally using a dataset of 2,336 COVID-19-related tweets, half of which contain misleading visualizations, supplemented with real-world examples of perceptual, cognitive, and conceptual errors drawn from VisLies, the IEEE VIS community event dedicated to showcasing deceptive and misleading visualizations. To ensure broad coverage of the current LLM landscape, we evaluated 16 state-of-the-art models, 15 of which are open-weight, spanning a wide range of model sizes, architectural families, and reasoning capabilities. The selection comprises small models, namely Nemotron-Nano-V2-VL (12B parameters), Mistral-Small-3.2 (24B), DeepSeek-VL2 (27B), Gemma3 (27B), and GTA1 (32B); medium-sized models, namely Qianfan-VL (70B), Molmo (72B), GLM-4.5V (108B), LLaVA-NeXT (110B), and Pixtral-Large (124B); and large models, namely Qwen3-VL (235B), InternVL3.5 (241B), Step3 (321B), Llama-4-Maverick (400B), and Kimi-K2.5 (1000B). In addition, we employed OpenAI GPT-5.4, a frontier proprietary model. To establish a human perspective on these tasks, we also conducted a user study with visualization experts to assess how people perceive the rhetorical techniques and authorial intentions behind the same misleading visualizations. This comparison between model and expert behavior reveals where LLMs align with human judgment and where they diverge.