AV-Unified: A Unified Framework for Audio-Visual Scene Understanding

2026-03-06
Computer Vision and Pattern Recognition

AI summary

The authors present AV-Unified, a single system that learns many different audio-visual tasks together rather than separately. They convert all tasks into a common format made of sequences of discrete tokens, allowing one model to handle different jobs across various datasets. The system uses dedicated modules to capture audio-visual events at different temporal scales and to link sounds and visuals spatially, even when audio supervision is missing in the visual domain. Task-specific text prompts help the model recognize which task it is performing. Tests show AV-Unified performs well on tasks involving when and where sounds happen in videos.

Keywords
audio-visual scene understanding, event localization, multi-scale temporal perception, cross-modal learning, spatiotemporal perception, discrete tokens, task-specific prompts, multimodal datasets, audio-visual association
Authors
Guangyao Li, Xin Wang, Wenwu Zhu
Abstract
When humans perceive the world, they naturally integrate multiple audio-visual tasks within dynamic, real-world scenes. However, tasks such as event localization, parsing, segmentation, and question answering are mostly studied individually, making it challenging to comprehensively understand complex audio-visual scenes and to explore inter-task relationships. Hence, we propose AV-Unified, a unified framework that enables joint learning across a wide range of audio-visual scene understanding tasks. AV-Unified standardizes the diverse input-output formats of each task and incorporates a multi-scale spatiotemporal perception network to effectively capture audio-visual associations. Specifically, we unify the inputs and outputs of all supported tasks by converting them into sequences of discrete tokens, establishing a shared representation that allows a single architecture to be trained jointly across heterogeneous datasets. Considering the varying temporal granularity of audio-visual events, a multi-scale temporal perception module is designed to capture key cues. Meanwhile, to overcome the lack of auditory supervision in the visual domain, we design a cross-modal guidance-based spatial perception module that models spatial audio-visual associations. Furthermore, task-specific text prompts are employed to enhance the model's adaptability and task-awareness. Extensive experiments on benchmark datasets (e.g., AVE, LLP, MUSIC-AVQA, VGG-SS, and AVS) demonstrate the effectiveness of AV-Unified across temporal, spatial, and spatiotemporal tasks.
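The abstract does not specify how heterogeneous task outputs are serialized into discrete tokens. Below is a minimal sketch of one plausible scheme, assuming a shared vocabulary of special tokens, task tags, and quantized time-bin tokens; all names here (TIME_BINS, the task set, the token formats) are illustrative assumptions, not the paper's actual design.

    # Hypothetical serialization of heterogeneous task outputs into one
    # discrete-token vocabulary (assumed design, not the paper's).
    TIME_BINS = 100                      # quantize timestamps into 100 bins
    SPECIALS = ["<bos>", "<eos>", "<sep>"]
    TASKS = ["ave", "avvp", "avqa", "avs"]

    # Build a shared vocabulary: special tokens, task tags, time-bin tokens.
    vocab = {tok: i for i, tok in enumerate(
        SPECIALS
        + [f"<task:{t}>" for t in TASKS]
        + [f"<t{b}>" for b in range(TIME_BINS)]
    )}

    def quantize(sec: float, duration: float) -> str:
        """Map a timestamp in seconds to a discrete time-bin token."""
        b = min(int(sec / duration * TIME_BINS), TIME_BINS - 1)
        return f"<t{b}>"

    def encode_event(task: str, start: float, end: float, duration: float):
        """Serialize one localized event as a token-id sequence."""
        toks = ["<bos>", f"<task:{task}>",
                quantize(start, duration), quantize(end, duration), "<eos>"]
        return [vocab[t] for t in toks]

    # Example: a 2.5s-7.0s event in a 10s clip for event localization.
    print(encode_event("ave", 2.5, 7.0, duration=10.0))

With all tasks expressed this way, a single sequence model can be trained jointly: spatial tasks would analogously quantize coordinates or mask tokens into the same vocabulary.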
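For the multi-scale temporal perception module, the abstract only states that it captures cues at varying temporal granularity. A common way to realize this idea, shown below as a sketch in PyTorch, is to run parallel 1-D convolutions with different kernel sizes over segment-level audio-visual features and fuse the branches; the kernel sizes and fusion choice are assumptions, not the paper's exact module.

    import torch
    import torch.nn as nn

    class MultiScaleTemporal(nn.Module):
        """Parallel temporal convolutions at several receptive-field sizes,
        concatenated and projected back to the input dimension.
        Illustrative sketch only."""
        def __init__(self, dim: int, scales=(1, 3, 5, 7)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv1d(dim, dim, k, padding=k // 2) for k in scales
            )
            self.proj = nn.Linear(dim * len(scales), dim)

        def forward(self, x):            # x: (batch, time, dim)
            h = x.transpose(1, 2)        # -> (batch, dim, time) for Conv1d
            multi = torch.cat([b(h) for b in self.branches], dim=1)
            return self.proj(multi.transpose(1, 2))  # -> (batch, time, dim)

    feats = torch.randn(2, 10, 256)      # e.g. 10 one-second segments
    out = MultiScaleTemporal(256)(feats)
    print(out.shape)                     # torch.Size([2, 10, 256])

Short kernels respond to brief events (e.g., a gunshot), while longer kernels aggregate evidence for sustained events (e.g., ongoing music), matching the varying granularity the abstract describes.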
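The cross-modal guidance-based spatial perception module is likewise unspecified beyond its goal of modeling spatial audio-visual associations. One minimal instantiation, sketched below under the assumption of audio-as-query attention over visual feature-map locations, weights each spatial position by its relevance to the sound; this is a stand-in for the paper's module, not its actual implementation.

    import torch
    import torch.nn as nn

    class AudioGuidedSpatialAttention(nn.Module):
        """The audio embedding acts as a query over flattened visual
        feature-map locations; the attention map re-weights the visual
        features. Illustrative sketch only."""
        def __init__(self, dim: int):
            super().__init__()
            self.q = nn.Linear(dim, dim)   # audio -> query
            self.k = nn.Linear(dim, dim)   # visual locations -> keys

        def forward(self, audio, visual):  # audio: (B, D), visual: (B, HW, D)
            q = self.q(audio).unsqueeze(1)                    # (B, 1, D)
            k = self.k(visual)                                # (B, HW, D)
            attn = torch.softmax(
                (q * k).sum(-1) / k.size(-1) ** 0.5, dim=-1)  # (B, HW)
            return attn.unsqueeze(-1) * visual  # audio-weighted visual feats

    a = torch.randn(2, 256)
    v = torch.randn(2, 7 * 7, 256)       # 7x7 visual feature map, flattened
    print(AudioGuidedSpatialAttention(256)(a, v).shape)  # (2, 49, 256)

The attention weights themselves can double as a coarse sound-source localization map, which is how such guidance could serve spatial tasks like VGG-SS and AVS even when explicit auditory supervision in the visual domain is unavailable.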