Training-Free Semantic Multi-Object Tracking with Vision-Language Models
2026-04-15 • Computer Vision and Pattern Recognition
AI summary
The authors propose TF-SMOT, a new method that tracks multiple objects in videos and adds meaningful descriptions without needing extra training. Instead of training a whole system, TF-SMOT uses existing pre-trained tools to identify objects, follow them over time, and generate captions and summaries. Their method works well on a benchmark called BenSMOT, improving tracking and descriptions compared to previous methods. However, recognizing detailed interactions between objects remains tough, partly because of challenges in matching precise and fine-grained labels.
Semantic Multi-Object Tracking (SMOT) • pretrained models • object detection • mask-based tracking • video-language generation • tracklets • InternVideo2.5 • WordNet • semantic retrieval • large language models (LLM)
Authors
Laurence Bonat, Francesco Tonini, Elisa Ricci, Lorenzo Vaquero
Abstract
Semantic Multi-Object Tracking (SMOT) extends multi-object tracking with semantic outputs such as video summaries, instance-level captions, and interaction labels, aiming to move from trajectories to human-interpretable descriptions of dynamic scenes. Existing SMOT systems are trained end-to-end, which couples progress to expensive supervision and limits their ability to rapidly adapt to new foundation models and new interactions. We propose TF-SMOT, a training-free SMOT pipeline that composes pretrained components for detection, mask-based tracking, and video-language generation. TF-SMOT combines the D-FINE detector with the promptable SAM2 segmentation tracker to produce temporally consistent tracklets, uses contour grounding to generate video summaries and instance captions with InternVideo2.5, and aligns extracted interaction predicates to BenSMOT WordNet synsets via gloss-based semantic retrieval with LLM disambiguation. On BenSMOT, TF-SMOT achieves state-of-the-art tracking performance within the SMOT setting and improves summary and caption quality over prior art. Interaction recognition, however, remains challenging under strict exact-match evaluation on the fine-grained, long-tailed WordNet label space; our analysis and ablations indicate that semantic overlap and label granularity substantially affect measured performance.
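As a rough illustration of the gloss-based retrieval step mentioned in the abstract, the sketch below matches a free-form interaction predicate against a small hand-made table of WordNet-style synset glosses by word overlap. The synset names, glosses, and scoring rule here are illustrative assumptions; the paper's actual retrieval over the BenSMOT label space and the LLM disambiguation stage are not reproduced.

```python
# Toy gloss-based semantic retrieval: pick the synset whose gloss shares
# the most words with an extracted interaction predicate.
# (Hand-made synset table; NOT the real BenSMOT/WordNet label space.)
SYNSET_GLOSSES = {
    "chase.v.01": "go after with the intent to catch",
    "follow.v.01": "to travel behind, go after, come after",
    "carry.v.01": "move while supporting, either in a vehicle or in one's hands",
}

def align_predicate(predicate: str) -> str:
    """Return the synset name whose gloss has the largest word overlap
    with the predicate (a crude stand-in for semantic retrieval)."""
    tokens = set(predicate.lower().split())

    def overlap(item: tuple[str, str]) -> int:
        _, gloss = item
        return len(tokens & set(gloss.lower().split()))

    return max(SYNSET_GLOSSES.items(), key=overlap)[0]

print(align_predicate("go after to catch"))  # -> chase.v.01
```

In practice such lexical overlap is brittle for near-synonyms (e.g. "chase" vs. "follow"), which is one reason an LLM disambiguation pass and embedding-based retrieval are preferable; the abstract's note on semantic overlap and label granularity points at exactly this failure mode.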