Who Guards the Guardians? The Challenges of Evaluating Identifiability of Learned Representations

2026-02-27

Machine Learning
AI summary

The authors show that the standard checks used to test whether representation learning methods recover the true underlying factors are reliable only under certain conditions. Each check rests on hidden assumptions about how the data were generated and how the method's encoder behaves. When those assumptions do not hold, a check can wrongly declare a method a success or a failure. The authors organize these assumptions into clear categories and release tools for stress-testing the checks themselves across different settings.

identifiability, representation learning, data-generating process, encoder, evaluation metrics, MCC, DCI, R^2, benchmarking
Authors
Shruti Joshi, Théo Saulus, Wieland Brendel, Philippe Brouillard, Dhanya Sridhar, Patrik Reizinger
Abstract
Identifiability in representation learning is commonly evaluated using standard metrics (e.g., MCC, DCI, R^2) on synthetic benchmarks with known ground-truth factors. These metrics are assumed to reflect recovery up to the equivalence class guaranteed by identifiability theory. We show that this assumption holds only under specific structural conditions: each metric implicitly encodes assumptions about both the data-generating process (DGP) and the encoder. When these assumptions are violated, metrics become misspecified and can produce systematic false positives and false negatives. Such failures occur both within classical identifiability regimes and in post-hoc settings where identifiability is most needed. We introduce a taxonomy separating DGP assumptions from encoder geometry, use it to characterise the validity domains of existing metrics, and release an evaluation suite for reproducible stress testing and comparison.
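To make the failure mode concrete, here is a minimal sketch (an illustration, not the paper's released evaluation suite) that scores a representation recovering the ground-truth factors exactly up to an element-wise invertible transform, i.e., within the equivalence class guaranteed by nonlinear-ICA-style identifiability results. A Pearson-based MCC is deflated by the transform (a false negative), while a rank-based variant is invariant to it. The data-generating process, the tanh distortion, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import spearmanr


def mcc(z_true, z_hat, rank=False):
    """Mean correlation coefficient under an optimal factor-to-latent matching."""
    d = z_true.shape[1]
    if rank:
        # Rank correlation is invariant to monotone element-wise transforms.
        c, _ = spearmanr(z_true, z_hat)
        c = c[:d, d:]
    else:
        c = np.corrcoef(z_true.T, z_hat.T)[:d, d:]
    rows, cols = linear_sum_assignment(-np.abs(c))  # maximise matched |corr|
    return np.abs(c[rows, cols]).mean()


rng = np.random.default_rng(0)
z = rng.normal(size=(10_000, 5))  # hypothetical ground-truth factors
z_hat = np.tanh(3.0 * z)          # exact recovery up to an element-wise
                                  # invertible map: inside the equivalence class

print(f"Pearson MCC:  {mcc(z, z_hat):.3f}")             # below 1: a false negative
print(f"Spearman MCC: {mcc(z, z_hat, rank=True):.3f}")  # 1.0: transform-invariant
```

Under this toy DGP the Pearson score lands around 0.8 despite exact recovery up to the equivalence class, while the Spearman score is 1.0; which of the two is "correct" depends on the equivalence class the identifiability theory actually guarantees, which is exactly the kind of implicit metric assumption the paper makes explicit.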