Back into Plato's Cave: Examining Cross-modal Representational Convergence at Scale

2026-04-20

Computer Vision and Pattern Recognition · Artificial Intelligence · Machine Learning
AI summary

The authors examine the claim that neural networks trained on different modalities, such as images and text, converge toward very similar internal representations of the world. They find that prior evidence for this claim rests on small datasets and a specific evaluation protocol (mutual nearest neighbors) that does not hold up on larger, more realistic datasets. Evaluated at scale, cross-modal similarity is weaker and captures only coarse semantic themes rather than fine-grained matches. They also show that newer language models do not necessarily align better with vision models, challenging the earlier reported trend. Overall, the authors suggest that models trained on different modalities may each learn rich, useful representations, just not the same one.

Neural networks · Cross-modal representation · Mutual nearest neighbors · Modalities · Semantic alignment · Language models · Vision models · Dataset scaling · Representation learning · Many-to-many matching
Authors
A. Sophia Koepke, Daniil Zverev, Shiry Ginosar, Alexei A. Efros
Abstract
The Platonic Representation Hypothesis suggests that neural networks trained on different modalities (e.g., text and images) align and eventually converge toward the same representation of reality. If true, this has significant implications for whether modality choice matters at all. We show that the experimental evidence for this hypothesis is fragile and depends critically on the evaluation regime. Alignment is measured using mutual nearest neighbors on small datasets ($\approx$1K samples) and degrades substantially as the dataset is scaled to millions of samples. The alignment that remains between model representations reflects coarse semantic overlap rather than consistent fine-grained structure. Moreover, the evaluations in Huh et al. are done in a one-to-one image-caption setting, a constraint that breaks down in realistic many-to-many settings and further reduces alignment. We also find that the reported trend of stronger language models increasingly aligning with vision does not appear to hold for newer models. Overall, our findings suggest that the current evidence for cross-modal representational convergence is considerably weaker than subsequent works have taken it to be. Models trained on different modalities may learn equally rich representations of the world, just not the same one.
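For concreteness, the mutual nearest neighbors alignment measure referenced in the abstract can be sketched as follows. This is a minimal illustration assuming cosine similarity, a fixed top-k neighbor set, and paired features for the same n samples from two models; the function names and the choice of k are ours for illustration, not necessarily the configuration used in the paper.

```python
# Minimal sketch of a mutual k-nearest-neighbor alignment score between two
# representations of the same samples (assumed setup; names are illustrative).
import numpy as np

def knn_indices(feats: np.ndarray, k: int) -> np.ndarray:
    """Indices of each row's k nearest neighbors (excluding self) under cosine similarity."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)          # exclude each point as its own neighbor
    return np.argsort(-sim, axis=1)[:, :k]  # top-k most similar indices per row

def mutual_knn_alignment(feats_a: np.ndarray, feats_b: np.ndarray, k: int = 10) -> float:
    """Mean fraction of shared k-NN between two feature spaces.

    feats_a, feats_b: (n, d_a) and (n, d_b) matrices where row i in both
    corresponds to the same sample (e.g., an image and its caption).
    """
    nn_a = knn_indices(feats_a, k)
    nn_b = knn_indices(feats_b, k)
    overlaps = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlaps))
```

In this protocol, one would embed each image with a vision model and each paired caption with a language model, then average the neighbor-set overlap across samples. The abstract's central observation is that scores obtained on small ($\approx$1K-sample) subsets degrade substantially as n grows into the millions, and degrade further once the one-to-one image-caption pairing is relaxed to realistic many-to-many matching.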