AnchorSeg: Language Grounded Query Banks for Reasoning Segmentation
2026-04-20 • Computer Vision and Pattern Recognition
AI summary
The authors address the challenge of reasoning segmentation, where the goal is to identify precise areas in an image based on complex text queries. They propose AnchorSeg, a method that separates semantic understanding from spatial localization by using multiple queries instead of just one combined token. Their approach models spatial information with a special anchor token and semantic states with other tokens, improving clarity in the segmentation process. They also introduce a training technique called Token–Mask Cycle Consistency to better align token predictions with pixel-level labels. This method achieves improved accuracy on the ReasonSeg dataset.
reasoning segmentation, semantic reasoning, spatial localization, query tokens, conditional generation, image tokens, Token–Mask Cycle Consistency, pixel-level mask, ReasonSeg dataset, gIoU
Authors
Rui Qian, Chuanhang Deng, Qiang Huang, Jian Xiong, Mingxuan Li, Yingbo Zhou, Wei Zhai, Jintao Chen, Dejing Dou
Abstract
Reasoning segmentation requires models to ground complex, implicit textual queries into precise pixel-level masks. Existing approaches rely on a single segmentation token $\texttt{<SEG>}$, whose hidden state implicitly encodes both semantic reasoning and spatial localization, limiting the model's ability to explicitly disentangle what to segment from where to segment. We introduce AnchorSeg, which reformulates reasoning segmentation as a structured conditional generation process over image tokens, conditioned on language-grounded query banks. Instead of compressing all semantic reasoning and spatial localization into a single embedding, AnchorSeg constructs an ordered sequence of query banks: latent reasoning tokens that capture intermediate semantic states, and a segmentation anchor token that provides explicit spatial grounding. We model spatial conditioning as a factorized distribution over image tokens, where the anchor query determines localization signals while contextual queries provide semantic modulation. To bridge token-level predictions and pixel-level supervision, we propose Token–Mask Cycle Consistency (TMCC), a bidirectional training objective that enforces alignment across resolutions. By explicitly decoupling spatial grounding from semantic reasoning through structured language-grounded query banks, AnchorSeg achieves state-of-the-art results on the ReasonSeg test set (67.7\% gIoU and 68.1\% cIoU). All code and models are publicly available at https://github.com/rui-qian/AnchorSeg.
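The abstract's two mechanisms — an anchor query that drives localization over image tokens while contextual queries modulate semantics, and a bidirectional token–mask consistency loss — can be illustrated with a toy numpy sketch. This is purely an assumption-laden illustration, not the paper's actual implementation: the function names (`anchor_conditioned_mask`, `tmcc_loss`), the sigmoid gating, the mean-pooling of reasoning queries, and the average-pool/nearest-neighbour resampling are all hypothetical simplifications.

```python
import numpy as np

def anchor_conditioned_mask(image_tokens, reasoning_queries, anchor_query):
    """Toy sketch of factorized spatial conditioning (illustrative only):
    the anchor query yields per-image-token localization logits, and the
    contextual (reasoning) queries supply a semantic modulation gate."""
    # Localization signal: similarity between the anchor and each image token
    loc_logits = image_tokens @ anchor_query                    # shape (N,)
    # Semantic modulation: aggregate reasoning queries, gate each image token
    context = reasoning_queries.mean(axis=0)                    # shape (D,)
    sem_gate = 1.0 / (1.0 + np.exp(-(image_tokens @ context)))  # shape (N,)
    # Per-token foreground probability, factorized over image tokens
    return 1.0 / (1.0 + np.exp(-(loc_logits * sem_gate)))       # shape (N,)

def tmcc_loss(token_probs, pixel_mask, grid_hw):
    """Toy Token–Mask Cycle Consistency: enforce agreement between
    token-level predictions and a pixel-level mask in both directions."""
    H, W = pixel_mask.shape
    h, w = grid_hw
    # mask -> tokens: average-pool the pixel mask onto the token grid
    pooled = pixel_mask.reshape(h, H // h, w, W // w).mean(axis=(1, 3)).ravel()
    # tokens -> mask: nearest-neighbour upsample token probs to pixel resolution
    up = np.repeat(np.repeat(token_probs.reshape(h, w), H // h, 0), W // w, 1)
    return np.mean((token_probs - pooled) ** 2) + np.mean((up - pixel_mask) ** 2)
```

In a real system the gating and resampling would be learned modules inside the segmentation decoder; the sketch only shows how decoupling "where" (anchor logits) from "what" (semantic gate) factorizes the per-token prediction, and how a cycle loss can tie coarse token predictions to dense supervision.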