EffiMiniVLM: A Compact Dual-Encoder Regression Framework
2026-04-03 • Computer Vision and Pattern Recognition
AI summary
The authors developed a small, efficient model called EffiMiniVLM to predict product quality from images and text alone, without relying on user reviews. The model pairs a streamlined image encoder with a compact text encoder and uses a loss function that gives more weight to reliable data points, improving sample efficiency. Although it is much smaller and uses fewer resources than competing models, it performs competitively in predicting product ratings on Amazon data. The authors also show that with more training data it can overtake larger, more complex models, demonstrating that it scales well despite its compact size.
multimodal learning, vision-language models, EfficientNet, MiniLM, regression, weighted Huber loss, cold-start problem, Amazon Reviews dataset, model efficiency, scalability
Authors
Yin-Loon Khor, Yi-Jie Wong, Yan Chai Hum
Abstract
Predicting product quality from multimodal item information is critical in cold-start scenarios, where user interaction history is unavailable and predictions must rely on images and textual metadata. However, existing vision-language models typically depend on large architectures or extensive external datasets, resulting in high computational cost. To address this, we propose EffiMiniVLM, a compact dual-encoder vision-language regression framework that integrates an EfficientNet-B0 image encoder and a MiniLM-based text encoder with a lightweight regression head. To improve training sample efficiency, we introduce a weighted Huber loss that leverages rating counts to emphasize more reliable samples, yielding consistent performance gains. Trained on only 20% of the Amazon Reviews 2023 dataset, the proposed model contains 27.7M parameters and requires 6.8 GFLOPs, yet achieves a CES score of 0.40 with the lowest resource cost in the benchmark. Despite its small size, it remains competitive with significantly larger models, achieving comparable performance while being approximately 4x to 8x more resource-efficient than the other top-5 methods and being the only approach that does not use external datasets. Further analysis shows that scaling the training data to 40% alone allows our model to overtake competing methods that use larger models and external datasets, highlighting strong scalability despite the model's compact design.
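The abstract describes a Huber loss weighted by per-item rating counts so that items with more ratings (whose average rating is a more reliable target) contribute more to training. A minimal sketch of that idea follows; the log-count normalized weighting and the `delta` threshold are illustrative assumptions, not the paper's exact formulation.

```python
import math

def weighted_huber_loss(preds, targets, counts, delta=1.0):
    """Huber loss weighted by rating counts.

    Assumption: weights are log(1 + count), normalized to sum to 1;
    the paper's actual weighting scheme may differ.
    """
    huber = []
    for p, t in zip(preds, targets):
        r = abs(p - t)
        # quadratic near zero, linear beyond delta (standard Huber)
        huber.append(0.5 * r * r if r <= delta else delta * (r - 0.5 * delta))
    weights = [math.log1p(c) for c in counts]
    total = sum(weights)
    # weighted average: high-count (reliable) items dominate the loss
    return sum((w / total) * h for w, h in zip(weights, huber))

# An item backed by 500 ratings outweighs one backed by 3:
loss_reliable_good = weighted_huber_loss([4.2, 3.0], [4.0, 4.0], [500, 3])
loss_reliable_bad = weighted_huber_loss([4.2, 3.0], [4.0, 4.0], [3, 500])
```

Down-weighting low-count items keeps noisy average ratings from dominating the gradient, which is consistent with the sample-efficiency gains the abstract reports.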