Structural Causal Bottleneck Models
2026-03-09 • Machine Learning
AI summary
The authors introduce structural causal bottleneck models (SCBMs), which assume that complex causal relationships between large sets of variables depend only on simpler, low-dimensional summaries called bottlenecks. These models help reduce the complexity of causal analysis by focusing on key summaries and can be learned using straightforward algorithms. The authors explore when SCBMs can be uniquely identified, relate them to existing ideas in information theory, and show how to estimate them in practice. They also demonstrate that using these bottlenecks helps improve causal effect estimation, especially when limited data is available across different tasks.
structural causal models, causal effect, dimension reduction, bottleneck, identifiability, information bottleneck, causal representation learning, causal abstraction, transfer learning
Authors
Simon Bing, Jonas Wahl, Jakob Runge
Abstract
We introduce structural causal bottleneck models (SCBMs), a novel class of structural causal models. At the core of SCBMs lies the assumption that causal effects between high-dimensional variables only depend on low-dimensional summary statistics, or bottlenecks, of the causes. SCBMs provide a flexible framework for task-specific dimension reduction while being estimable via standard, simple learning algorithms in practice. We analyse identifiability in SCBMs, connect them to information bottlenecks in the sense of Tishby & Zaslavsky (2015), and illustrate how to estimate them experimentally. We also demonstrate the benefit of bottlenecks for effect estimation in low-sample transfer learning settings. We argue that SCBMs provide an alternative to existing causal dimension reduction frameworks like causal representation learning or causal abstraction learning.
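The core assumption above — that the effect of a high-dimensional cause on an outcome flows entirely through a low-dimensional summary — can be illustrated with a toy example. The sketch below is not the paper's method; it assumes, purely for illustration, a linear one-dimensional bottleneck `t(X) = X @ w` and checks that a standard regression of the outcome on the full cause recovers a coefficient vector aligned with the bottleneck direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCBM-style setup (illustrative assumption): a 10-dimensional
# cause X affects the outcome Y only through a scalar bottleneck
# t(X) = X @ w. The paper's bottlenecks need not be linear.
d, n = 10, 5000
w = rng.normal(size=d)                    # hypothetical bottleneck direction
X = rng.normal(size=(n, d))               # high-dimensional cause
t = X @ w                                 # low-dimensional summary of X
Y = 2.0 * t + 0.1 * rng.normal(size=n)    # Y depends on X only via t

# Because Y given X depends on X only through t, ordinary least
# squares on the full X recovers a coefficient vector (approximately)
# proportional to w, i.e. the bottleneck direction is learnable with
# a standard, simple algorithm.
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
cos = beta @ w / (np.linalg.norm(beta) * np.linalg.norm(w))
print(f"alignment with bottleneck direction: {cos:.3f}")
```

With enough samples the cosine alignment is close to 1, reflecting that all causal information about Y is carried by the single summary t(X) rather than by the full 10-dimensional cause.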