V2M-Zero: Zero-Pair Time-Aligned Video-to-Music Generation

2026-03-11 · Computer Vision and Pattern Recognition

Computer Vision and Pattern Recognition · Artificial Intelligence · Machine Learning · Multimedia · Sound
AI summary

The authors created V2M-Zero, a method to generate music that syncs with events in videos without needing paired video-music training data. They observed that syncing music to video is about matching the timing and amount of change, not the exact content, so they use event curves computed independently within each modality to align timing. Their approach fine-tunes a music model conditioned on music-event curves and then substitutes video-event curves at inference, allowing the system to produce time-aligned music from videos. Tests showed their method improves audio quality, semantic matching, and temporal synchronization compared to models trained on matched video-music pairs.

video-to-music generation, temporal synchronization, event curves, intra-modal similarity, pretrained encoders, zero-pair learning, text-to-music models, cross-modal alignment, audio quality, beat alignment
Authors
Yan-Bo Lin, Jonah Casebeer, Long Mai, Aniruddha Mahapatra, Gedas Bertasius, Nicholas J. Bryan
Abstract
Generating music that temporally aligns with video events is challenging for existing text-to-music models, which lack fine-grained temporal control. We introduce V2M-Zero, a zero-pair video-to-music generation approach that outputs time-aligned music for video. Our method is motivated by a key observation: temporal synchronization requires matching when and how much change occurs, not what changes. While musical and visual events differ semantically, they exhibit shared temporal structure that can be captured independently within each modality. We capture this structure through event curves computed from intra-modal similarity using pretrained music and video encoders. By measuring temporal change within each modality independently, these curves provide comparable representations across modalities. This enables a simple training strategy: fine-tune a text-to-music model on music-event curves, then substitute video-event curves at inference without cross-modal training or paired data. Across OES-Pub, MovieGenBench-Music, and AIST++, V2M-Zero achieves substantial gains over paired-data baselines: 5-21% higher audio quality, 13-15% better semantic alignment, 21-52% improved temporal synchronization, and 28% higher beat alignment on dance videos. We find similar results via a large crowd-sourced subjective listening test. Overall, our results validate that temporal alignment through within-modality features, rather than paired cross-modal supervision, is effective for video-to-music generation. Results are available at https://genjib.github.io/v2m_zero/
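The abstract's core idea, an event curve built from intra-modal similarity, can be sketched as follows. This is a hypothetical illustration, not the paper's exact formulation: given a sequence of per-frame (or per-audio-chunk) embeddings from any pretrained encoder, each step's "amount of change" is taken as one minus the cosine similarity between consecutive embeddings.

```python
import numpy as np

def event_curve(embeddings):
    """Compute a per-step event curve from a sequence of embeddings.

    Measures how much each embedding changes relative to the previous
    one (1 - cosine similarity). Higher values mean larger temporal
    change. Illustrative sketch only; the paper's exact curve
    construction may differ.
    """
    e = np.asarray(embeddings, dtype=float)
    e = e / np.linalg.norm(e, axis=1, keepdims=True)  # unit-normalize rows
    cos = np.sum(e[:-1] * e[1:], axis=1)              # adjacent cosine similarity
    return 1.0 - cos                                   # change per step

# Toy example: three nearly-static frames with one abrupt "cut" in between.
# A video clip and a music clip may differ semantically, but their event
# curves live in the same scalar space and can be compared directly,
# which is what makes the zero-pair substitution possible.
video_emb = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.0, 1.0]])
curve = event_curve(video_emb)
print(curve)  # near zero for small changes, large at the cut
```

Because the curve is a modality-agnostic scalar signal of when and how much change occurs, a model conditioned on music-event curves during fine-tuning can, at inference, be fed a video-event curve of the same shape instead.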