WikiCLIP: An Efficient Contrastive Baseline for Open-domain Visual Entity Recognition

2026-03-10
Computer Vision and Pattern Recognition

AI summary

The authors tackle recognizing entities in images by linking them to real-world knowledge-base entries such as Wikipedia articles. They introduce WikiCLIP, a contrastive method that combines language-model and image information to match images with the correct entities while running far faster than previous generative methods. Their approach also includes dedicated mechanisms for telling visually similar entities apart. Experiments show that WikiCLIP is both more accurate and substantially faster than strong baselines on popular open-domain recognition benchmarks.

Visual Entity Recognition, Open-domain Recognition, Knowledge Bases, Large Language Models, Contrastive Learning, Vision-Guided Knowledge Adaptor, Hard Negative Sampling, AutoVER, OVEN Dataset
Authors
Shan Ning, Longtian Qiu, Jiaxuan Sun, Xuming He
Abstract
Open-domain visual entity recognition (VER) seeks to associate images with entities in encyclopedic knowledge bases such as Wikipedia. Recent generative methods tailored for VER demonstrate strong performance but incur high computational costs, limiting their scalability and practical deployment. In this work, we revisit the contrastive paradigm for VER and introduce WikiCLIP, a simple yet effective framework that establishes a strong and efficient baseline for open-domain VER. WikiCLIP leverages large language model embeddings as knowledge-rich entity representations and enhances them with a Vision-Guided Knowledge Adaptor (VGKA) that aligns textual semantics with visual cues at the patch level. To further encourage fine-grained discrimination, a Hard Negative Synthesis Mechanism generates visually similar but semantically distinct negatives during training. Experimental results on popular open-domain VER benchmarks, such as OVEN, demonstrate that WikiCLIP significantly outperforms strong baselines. Specifically, WikiCLIP achieves a 16% improvement on the challenging OVEN unseen set, while reducing inference latency by nearly 100 times compared with the leading generative model, AutoVER. The project page is available at https://artanic30.github.io/project_pages/WikiCLIP/
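The abstract's contrastive setup — an image embedding scored against candidate entity embeddings, with visually similar hard negatives included among the candidates — can be illustrated with a minimal sketch. This is a generic InfoNCE-style retrieval loss in NumPy, not the authors' implementation; the embedding dimensions, the perturbation used to mimic a synthesized hard negative, and the temperature value are all illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Unit-normalize embeddings so the dot product is cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_loss(image_emb, entity_embs, positive_idx, temperature=0.07):
    """InfoNCE loss for one image against a candidate entity set.

    One candidate is the ground-truth entity; the rest act as negatives,
    which may include synthesized hard negatives (visually similar but
    semantically distinct entities, per the abstract's description).
    """
    sims = l2_normalize(image_emb) @ l2_normalize(entity_embs).T
    logits = sims / temperature
    logits = logits - logits.max()          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[positive_idx]        # cross-entropy on the positive

rng = np.random.default_rng(0)
img = rng.normal(size=64)                  # stand-in image embedding
entities = rng.normal(size=(8, 64))        # stand-in entity text embeddings
# Hypothetical hard negative: a slightly perturbed copy of the positive,
# mimicking a "visually similar but semantically distinct" candidate.
entities[1] = entities[0] + 0.1 * rng.normal(size=64)
loss = info_nce_loss(img, entities, positive_idx=0)
print(f"contrastive loss: {loss:.4f}")
```

In this framing, making the hard negative closer to the positive raises the loss, which is exactly the pressure the abstract's Hard Negative Synthesis Mechanism is meant to exploit during training.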