Scale Can't Overcome Pragmatics: The Impact of Reporting Bias on Vision-Language Reasoning
2026-02-26 • Computation and Language • Computer Vision and Pattern Recognition
AI summary
The authors find that popular vision-language models (VLMs) struggle with certain reasoning tasks like understanding space, time, negation, and counting because their training data usually leaves out important details people don't mention when describing images. They show that just making models bigger or using more data, even in different languages, doesn't fix this problem. However, when training data includes special annotations that add the missing information, models improve. Their work suggests that carefully choosing and adding the right training information is better than just relying on massive amounts of data.
Vision-Language Models • Reporting Bias • Spatial Reasoning • Temporal Reasoning • Negation • Counting • Training Data Curation • OpenCLIP • LLaVA • Pragmatics
Authors
Amita Kamath, Jack Hessel, Khyathi Chandu, Jena D. Hwang, Kai-Wei Chang, Ranjay Krishna
Abstract
The lack of reasoning capabilities in Vision-Language Models (VLMs) has remained at the forefront of research discourse. We posit that this behavior stems from a reporting bias in their training data. That is, how people communicate about visual content by default omits tacit information needed to supervise some types of reasoning; e.g., "at the game today!" is a more likely caption than "a photo of 37 people standing behind a field". We investigate the data underlying the popular VLMs OpenCLIP, LLaVA-1.5, and Molmo through the lens of theories from pragmatics, and find that reporting bias results in insufficient representation of four reasoning skills (spatial, temporal, negation, and counting), despite the corpora being web-scale and/or synthetically generated. With a set of curated benchmarks, we demonstrate that: (i) VLMs perform poorly on the aforementioned types of reasoning suppressed in the training data by reporting bias; (ii) contrary to popular belief, scaling data size, model size, and the number of training languages does not result in emergence of these skills by default; but, promisingly, (iii) incorporating annotations specifically collected to obtain tacit information is effective. Our findings highlight the need for more intentional training data curation methods, rather than counting on scale for emergence of reasoning capabilities.
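The kind of corpus audit the abstract describes can be illustrated with a minimal sketch: count how many captions in a corpus mention any term from small lexicons for the four skills. The lexicons and the `skill_coverage` helper below are our own simplified assumptions for illustration, not the authors' actual methodology or word lists.

```python
import re
from collections import Counter

# Toy lexicons for the four reasoning skills the paper studies.
# These word lists are illustrative assumptions, not the paper's lexicon.
SKILL_TERMS = {
    "spatial": {"left", "right", "above", "below", "behind", "front"},
    "temporal": {"before", "after", "while", "during"},
    "negation": {"no", "not", "without", "none"},
    "counting": {"one", "two", "three", "four", "five", "many", "few"},
}

def skill_coverage(captions):
    """Return, per skill, the fraction of captions containing
    at least one term from that skill's lexicon."""
    counts = Counter()
    for caption in captions:
        tokens = set(re.findall(r"[a-z]+", caption.lower()))
        for skill, terms in SKILL_TERMS.items():
            if tokens & terms:
                counts[skill] += 1
    n = max(len(captions), 1)
    return {skill: counts[skill] / n for skill in SKILL_TERMS}

# Small example mirroring the abstract's point: naturally written
# captions rarely state spatial, temporal, negated, or counted facts.
captions = [
    "at the game today!",
    "a dog",
    "two cats to the left of a sofa",
    "sunset over the beach",
]
print(skill_coverage(captions))
# Only the deliberately explicit caption contributes coverage.
```

On a real web-caption corpus, such an audit would surface the under-representation the paper attributes to reporting bias; lexicon matching is of course a crude proxy compared with a pragmatics-informed analysis.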