Multimodal Large Language Models as Image Classifiers
2026-03-06 • Computer Vision and Pattern Recognition
AI summary
The authors found that how researchers evaluate multimodal large language models (MLLMs) greatly affects reported accuracy, often making the models look better or worse than they really are. They fixed common evaluation problems such as discarding unexpected model answers and using weak multiple-choice distractors. By improving the labels of a large image dataset, the authors showed that MLLMs perform closer to supervised models than previously thought. They also discovered that MLLMs can help human reviewers improve annotation quality, especially on hard examples.
Multimodal Large Language Models (MLLM) · Evaluation Protocol · Ground Truth · ImageNet-1k · Supervised Learning · Annotation Quality · Batch Size · Image Classification · Vision-Language Models
Authors
Nikita Kisel, Illia Volkov, Klara Janouskova, Jiri Matas
Abstract
The classification performance of Multimodal Large Language Models (MLLMs) depends critically on the evaluation protocol and the quality of the ground truth. Studies comparing MLLMs with supervised and vision-language models report conflicting conclusions, and we show these conflicts stem from protocols that either inflate or underestimate performance. Across the most common evaluation protocols, we identify and fix key issues: model outputs that fall outside the provided class list and are therefore discarded, results inflated by weak multiple-choice distractors, and an open-world setting that underperforms only because of poor output mapping. We additionally quantify the impact of commonly overlooked design choices (batch size, image ordering, and text encoder selection), showing that they substantially affect accuracy. Evaluation on ReGT, our multilabel reannotation of 625 ImageNet-1k classes, reveals that MLLMs benefit most from corrected labels (up to +10.8%), substantially narrowing the perceived gap with supervised models. Much of the reported MLLM underperformance on classification is thus an artifact of noisy ground truth and flawed evaluation protocols rather than genuine model deficiency. Models less reliant on supervised training signals prove most sensitive to annotation quality. Finally, we show that MLLMs can assist human annotators: in a controlled case study, annotators confirmed or integrated MLLM predictions in approximately 50% of difficult cases, demonstrating their potential for large-scale dataset curation.
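To make the open-world output-mapping issue concrete, the following is a minimal sketch of mapping a free-form MLLM answer onto a fixed class list via text-embedding similarity, rather than discarding answers that do not match a class name exactly. The encoder choice (`all-MiniLM-L6-v2`), the toy class list, and the example answer are illustrative assumptions, not the protocol used in the paper.

```python
# Minimal sketch: map free-form MLLM answers onto a fixed class list
# via text-embedding similarity instead of discarding off-list outputs.
# Encoder, class names, and example answer are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

CLASS_NAMES = ["tabby cat", "tiger cat", "Egyptian cat", "lynx"]  # toy subset

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works here
class_embs = encoder.encode(
    CLASS_NAMES, convert_to_tensor=True, normalize_embeddings=True
)

def map_answer(answer: str) -> str:
    """Return the class whose name is most similar to the free-form answer.

    Exact matches short-circuit; everything else falls back to cosine
    similarity, so an off-list answer like "a striped house cat" is
    mapped to the nearest class instead of being discarded.
    """
    if answer in CLASS_NAMES:
        return answer
    ans_emb = encoder.encode(answer, convert_to_tensor=True, normalize_embeddings=True)
    scores = util.cos_sim(ans_emb, class_embs)[0]
    return CLASS_NAMES[int(scores.argmax())]

print(map_answer("a striped house cat"))  # -> likely "tabby cat"
```

Since the abstract reports that text encoder selection itself measurably shifts accuracy, the encoder in such a pipeline should be treated as an experimental variable to report, not an incidental implementation detail.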