ThermEval: A Structured Benchmark for Evaluation of Vision-Language Models on Thermal Imagery
2026-02-16 • Computer Vision and Pattern Recognition • Artificial Intelligence • Machine Learning
AI summary
The authors created a new benchmark, ThermEval-B, to test how well vision-language models (VLMs) understand thermal images, which encode temperature rather than color. They combined existing datasets with a newly collected one that includes dense temperature maps and body-part labels across varied indoor and outdoor environments. Testing 25 models, they found that VLMs struggle to reason about temperature and often rely on guesswork rather than the image data. The work shows that models trained on regular color images transfer poorly to thermal imagery and highlights the need for specialized tools and tests in this area.
Vision Language Models (VLMs) • Thermal Imaging • Visual Question Answering • Temperature Maps • Semantic Segmentation • RGB Imagery • Model Evaluation • Benchmark Dataset • Prompting • Fine-Tuning
Authors
Ayush Shrivastava, Kirtan Gangani, Laksh Jain, Mayank Goel, Nipun Batra
Abstract
Vision-language models (VLMs) achieve strong performance on RGB imagery, but they fail to generalize to thermal images. Thermal sensing plays a critical role in settings where visible light fails, including nighttime surveillance, search and rescue, autonomous driving, and medical screening. Unlike RGB imagery, thermal images encode physical temperature rather than color or texture, requiring perceptual and reasoning capabilities that existing RGB-centric benchmarks do not evaluate. We introduce ThermEval-B, a structured benchmark of approximately 55,000 thermal visual question answering pairs designed to assess the foundational primitives required for thermal vision language understanding. ThermEval-B integrates public datasets with our newly collected ThermEval-D, the first dataset to provide dense per-pixel temperature maps with semantic body-part annotations across diverse indoor and outdoor environments. Evaluating 25 open-source and closed-source VLMs, we find that models consistently fail at temperature-grounded reasoning, degrade under colormap transformations, and default to language priors or fixed responses, with only marginal gains from prompting or supervised fine-tuning. These results demonstrate that thermal understanding requires dedicated evaluation beyond RGB-centric assumptions, positioning ThermEval as a benchmark to drive progress in thermal vision language modeling.
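The abstract's point about colormap transformations is concrete enough to sketch: a single per-pixel temperature map can be rendered under different palettes, so an RGB-trained model sees entirely different pixels for identical underlying physics. The Python sketch below illustrates that transformation; the synthetic temperature map, the function name render_colormap, and the choice of palettes are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np
import matplotlib

def render_colormap(temps_c: np.ndarray, cmap_name: str) -> np.ndarray:
    """Render a dense per-pixel temperature map (degrees Celsius) as an
    RGB image under a given matplotlib colormap. A sketch of the kind of
    colormap transformation used to probe VLM robustness, not the
    authors' actual pipeline."""
    t_min, t_max = temps_c.min(), temps_c.max()
    normalized = (temps_c - t_min) / (t_max - t_min + 1e-8)  # scale to [0, 1]
    cmap = matplotlib.colormaps[cmap_name]
    rgba = cmap(normalized)                  # (H, W, 4) floats in [0, 1]
    return (rgba[..., :3] * 255).astype(np.uint8)

# Hypothetical temperature map standing in for a ThermEval-D-style sample.
temps = np.random.uniform(15.0, 38.0, size=(240, 320))

# Identical temperatures, three very different-looking images: a model that
# reads pixels rather than temperature will answer differently for each.
for name in ("inferno", "jet", "gray"):
    rgb = render_colormap(temps, name)
    print(f"{name}: shape={rgb.shape}, dtype={rgb.dtype}")
```

Asking the same question across such renderings separates genuine temperature-grounded reasoning from palette-specific pattern matching, which is the failure mode the benchmark is designed to expose.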