Orthogonalized Multimodal Contrastive Learning with Asymmetric Masking for Structured Representations

2026-02-16
Machine Learning

AI summary

The authors address how machines learn from different types of data at once, such as images and text, by focusing on three kinds of information: what is shared between data types, what is unique to each, and what appears only when combining them. They propose a new method called COrAL that separates these information types using orthogonality constraints and asymmetric masking, which hides parts of the data to force the model to reason about cross-modal interactions. Their experiments show this approach produces better and more reliable data representations than previous methods. Overall, the authors demonstrate that capturing all kinds of multimodal information leads to more stable and complete learning.

multimodal learning, self-supervised learning, contrastive learning, redundant information, unique information, synergistic information, orthogonality constraints, asymmetric masking, feature disentanglement, data representation
Authors
Carolin Cissee, Raneen Younis, Zahra Ahmadi
Abstract
Multimodal learning seeks to integrate information from heterogeneous sources, where signals may be shared across modalities, specific to individual modalities, or emerge only through their interaction. While self-supervised multimodal contrastive learning has achieved remarkable progress, most existing methods predominantly capture redundant cross-modal signals, often neglecting modality-specific (unique) and interaction-driven (synergistic) information. Recent extensions broaden this perspective, yet they either fail to explicitly model synergistic interactions or learn different information components in an entangled manner, leading to incomplete representations and potential information leakage. We introduce COrAL, a principled framework that explicitly and simultaneously preserves redundant, unique, and synergistic information within multimodal representations. COrAL employs a dual-path architecture with orthogonality constraints to disentangle shared and modality-specific features, ensuring a clean separation of information components. To promote synergy modeling, we introduce asymmetric masking with complementary view-specific patterns, compelling the model to infer cross-modal dependencies rather than rely solely on redundant cues. Extensive experiments on synthetic benchmarks and diverse MultiBench datasets demonstrate that COrAL consistently matches or outperforms state-of-the-art methods while exhibiting low performance variance across runs. These results indicate that explicitly modeling the full spectrum of multimodal information yields more stable, reliable, and comprehensive embeddings.
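
The sketch below is a minimal illustration of the two mechanisms named in the abstract, not the authors' implementation: a batch-level orthogonality penalty that discourages overlap between shared and modality-specific embeddings, and complementary asymmetric masks so that the positions hidden in one view remain visible in the other. The function names, the mask ratio, and the use of PyTorch are assumptions made for illustration only.

```python
# Illustrative sketch (assumed PyTorch); names and hyperparameters are hypothetical.
import torch


def orthogonality_penalty(shared: torch.Tensor, specific: torch.Tensor) -> torch.Tensor:
    """Penalize correlation between shared and modality-specific features.

    Both inputs have shape (batch, dim). The penalty is the squared Frobenius
    norm of their batch cross-correlation, which vanishes when the two
    feature sets are decorrelated.
    """
    shared = shared - shared.mean(dim=0, keepdim=True)
    specific = specific - specific.mean(dim=0, keepdim=True)
    cross = shared.T @ specific / shared.shape[0]  # (dim, dim) cross-correlation
    return (cross ** 2).sum()


def complementary_masks(num_tokens: int, mask_ratio: float = 0.5):
    """Return two boolean masks over token positions with no overlap,
    so each masked view must rely on the other view to recover hidden parts."""
    perm = torch.randperm(num_tokens)
    cut = int(num_tokens * mask_ratio)
    mask_a = torch.zeros(num_tokens, dtype=torch.bool)
    mask_b = torch.zeros(num_tokens, dtype=torch.bool)
    mask_a[perm[:cut]] = True   # view A hides one part of the permutation
    mask_b[perm[cut:]] = True   # view B hides the complementary part
    return mask_a, mask_b


if __name__ == "__main__":
    # Toy usage with random embeddings and a 16-token sequence.
    shared = torch.randn(32, 64)
    specific = torch.randn(32, 64)
    print("orthogonality penalty:", orthogonality_penalty(shared, specific).item())
    m_a, m_b = complementary_masks(16)
    print("masks overlap:", bool((m_a & m_b).any()))  # False: masks are disjoint
```

In this reading, the orthogonality term keeps the shared and modality-specific paths from encoding the same signal, while the disjoint masks prevent either view from solving the contrastive task from redundant cues alone.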