No Hard Negatives Required: Concept Centric Learning Leads to Compositionality without Degrading Zero-shot Capabilities of Contrastive Models

2026-03-26

Computer Vision and Pattern Recognition, Machine Learning
AI summary

The authors note that contrastive vision-language models struggle to correctly understand combinations of concepts. Rather than generating special hard-negative training examples, they target two root causes: long captions that do not force the model to learn compositions, and global pooling that discards the fine-grained information needed to bind concepts. They address these by breaking captions into short concept-centric parts and by pooling image features with a parameter-free cross-modal attention that focuses on the image regions relevant to each concept. The approach improves compositional understanding while preserving zero-shot and retrieval abilities.

contrastive learning, vision-language models, compositionality, hard negatives, captioning, global pooling, cross-modal attention, zero-shot learning, image encoder, NLP parsing
Authors
Hai X. Pham, David T. Hoffmann, Ricardo Guerrero, Brais Martinez
Abstract
Contrastive vision-language (V&L) models remain a popular choice for various applications. However, several limitations have emerged, most notably the limited ability of V&L models to learn compositional representations. Prior methods often addressed this limitation by generating custom training data to obtain hard negative samples. Hard negatives have been shown to improve performance on compositionality tasks, but are often specific to a single benchmark, do not generalize, and can cause substantial degradation of basic V&L capabilities such as zero-shot or retrieval performance, rendering them impractical. In this work we follow a different approach. We identify two root causes that limit the compositionality performance of V&L models: 1) long training captions do not require a compositional representation; and 2) the final global pooling in the text and image encoders leads to a complete loss of the information necessary to learn binding in the first place. As a remedy, we propose two simple solutions: 1) we obtain short concept-centric caption parts using standard NLP software and align those with the image; and 2) we introduce a parameter-free cross-modal attention pooling to obtain concept-centric visual embeddings from the image encoder. With these two changes and simple auxiliary contrastive losses, we obtain SOTA performance on standard compositionality benchmarks, while maintaining or improving strong zero-shot and retrieval capabilities. This is achieved without increasing inference cost. We release the code for this work at https://github.com/SamsungLabs/concept_centric_clip.
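As a rough illustration of the first fix, here is a minimal sketch of splitting a long caption into short concept-centric parts. The abstract only says "standard NLP software", so the choice of spaCy and of noun chunks as the concept unit is an assumption, not the paper's exact recipe.

```python
# Minimal sketch: split a caption into short concept-centric parts.
# Assumes spaCy with its small English model; using noun chunks as the
# concept unit is an illustrative choice.
import spacy

nlp = spacy.load("en_core_web_sm")

def concept_parts(caption: str) -> list[str]:
    """Return short noun-phrase chunks, roughly one per visual concept."""
    doc = nlp(caption)
    return [chunk.text for chunk in doc.noun_chunks]

print(concept_parts("A small brown dog chases a red ball on the wet grass"))
# e.g. ['A small brown dog', 'a red ball', 'the wet grass']
```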
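The second fix, the parameter-free cross-modal attention pooling, could be sketched as below: each concept's text embedding acts as the query over the image patch tokens, and the softmax-weighted sum gives one concept-centric visual embedding. Plain dot-product attention, the shapes, and the temperature value are assumptions on top of the abstract; note that no learned projections are involved, so no parameters are added.

```python
# Minimal sketch: parameter-free cross-modal attention pooling.
# Each concept text embedding queries the image patch tokens; the
# softmax-weighted sum yields one concept-centric visual embedding.
import torch
import torch.nn.functional as F

def concept_attention_pool(patch_tokens, text_embs, temperature=0.07):
    # patch_tokens: (N, D) patch embeddings from the image encoder
    # text_embs:    (K, D) embeddings of the K concept-centric caption parts
    q = F.normalize(text_embs, dim=-1)                    # (K, D)
    k = F.normalize(patch_tokens, dim=-1)                 # (N, D)
    attn = torch.softmax(q @ k.T / temperature, dim=-1)   # (K, N)
    return attn @ patch_tokens                            # (K, D)

patches = torch.randn(196, 512)   # e.g. a 14x14 ViT patch grid
concepts = torch.randn(3, 512)    # three caption parts
print(concept_attention_pool(patches, concepts).shape)   # torch.Size([3, 512])
```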
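Finally, the "simple auxiliary contrastive losses" could plausibly take the form of a symmetric InfoNCE-style loss between each pooled concept-centric visual embedding and its matching text embedding; treating the other concepts in the batch as in-batch negatives is an assumption, as the abstract does not spell out the loss.

```python
# Minimal sketch: auxiliary InfoNCE-style loss aligning pooled visual
# embeddings with their concept-centric text embeddings; other pairs in
# the batch act as negatives (an assumption, not the paper's exact loss).
import torch
import torch.nn.functional as F

def concept_contrastive_loss(visual, text, temperature=0.07):
    v = F.normalize(visual, dim=-1)          # (K, D)
    t = F.normalize(text, dim=-1)            # (K, D)
    logits = v @ t.T / temperature           # (K, K); matches on the diagonal
    targets = torch.arange(len(v))
    # Symmetric over visual->text and text->visual directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```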