AI summary
The authors developed a method called CODEC to better understand how neural networks process information by looking directly at how hidden neurons contribute to final outputs. Rather than merely observing neuron activations, CODEC breaks these contributions down into simple, sparse patterns that reveal cause-and-effect relationships inside the network. They found that as information moves through layers, these contributions become more distinct and increasingly separate positive from negative effects on the output. Applying CODEC helped the authors control and visualize how networks combine different image parts to make decisions, and also uncovered detailed neuron interactions in models of the retina. Overall, their approach offers a clearer way to interpret how complex neural networks work internally.
Keywords
neural networks, hidden neurons, sparse autoencoders, contribution decomposition, network interpretability, image classification, causal analysis, receptive fields, retina model, nonlinear computation
Authors
Joshua Brendan Melander, Zaki Alaoui, Shenghua Liu, Surya Ganguli, Stephen A. Baccus
Abstract
Understanding how neural networks transform inputs into outputs is crucial for interpreting and manipulating their behavior. Most existing approaches analyze internal representations by identifying hidden-layer activation patterns correlated with human-interpretable concepts. Here we take a direct approach to examine how hidden neurons act to drive network outputs. We introduce CODEC (Contribution Decomposition), a method that uses sparse autoencoders to decompose network behavior into sparse motifs of hidden-neuron contributions, revealing causal processes that cannot be determined by analyzing activations alone. Applying CODEC to benchmark image-classification networks, we find that contributions grow in sparsity and dimensionality across layers and, unexpectedly, that they progressively decorrelate positive and negative effects on network outputs. We further show that decomposing contributions into sparse modes enables greater control and interpretation of intermediate layers, supporting both causal manipulations of network output and human-interpretable visualizations of distinct image components that combine to drive that output. Finally, by analyzing state-of-the-art models of neural activity in the vertebrate retina, we demonstrate that CODEC uncovers combinatorial actions of model interneurons and identifies the sources of dynamic receptive fields. Overall, CODEC provides a rich and interpretable framework for understanding how nonlinear computations evolve across hierarchical layers, establishing contribution modes as an informative unit of analysis for mechanistic insights into artificial neural networks.
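The two-stage idea in the abstract, computing hidden neurons' additive contributions to an output and then fitting a sparse autoencoder over those contribution vectors, can be sketched in a few lines. The toy one-hidden-layer network, the dictionary size, the L1 weight, and the plain gradient-descent training loop below are all illustrative assumptions, not the paper's actual architecture or hyperparameters; the sketch only shows the kind of decomposition CODEC operates on.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: per-neuron contributions in a toy one-hidden-layer net ---
# (Hypothetical stand-in for a trained model; CODEC targets deep networks.)
n_in, n_hidden = 8, 16
W1 = rng.normal(size=(n_hidden, n_in)) / np.sqrt(n_in)
w2 = rng.normal(size=n_hidden) / np.sqrt(n_hidden)

def contributions(X):
    """Each hidden unit's additive contribution to the scalar output.

    With a linear readout, the output equals the sum of these terms,
    so they decompose what drives the output rather than merely
    describing activations.
    """
    H = np.maximum(X @ W1.T, 0.0)   # ReLU hidden activations, (N, n_hidden)
    return H * w2                    # elementwise scale by readout weights

X = rng.normal(size=(512, n_in))
C = contributions(X)
outputs = C.sum(axis=1)              # contributions sum to the network output

# --- Stage 2: sparse autoencoder over contribution vectors ---
# Minimal SAE (ReLU encoder, linear decoder, L1 penalty) trained by
# full-batch gradient descent; all hyperparameters are guesses.
m, lam, lr, steps = 32, 1e-3, 0.05, 1000
W_e = rng.normal(size=(n_hidden, m)) * 0.1
b_e = np.zeros(m)
W_d = rng.normal(size=(m, n_hidden)) * 0.1

def sae_loss(C):
    Z = np.maximum(C @ W_e + b_e, 0.0)       # sparse codes ("modes")
    R = Z @ W_d                               # reconstructed contributions
    return np.mean((R - C) ** 2) + lam * np.mean(np.abs(Z))

N = len(C)
loss_before = sae_loss(C)
for _ in range(steps):
    Z = np.maximum(C @ W_e + b_e, 0.0)
    R = Z @ W_d
    dR = 2.0 * (R - C) / (N * n_hidden)       # grad of mean squared error
    gW_d = Z.T @ dR
    dZ = (dR @ W_d.T + lam * np.sign(Z) / (N * m)) * (Z > 0)  # ReLU mask
    gW_e = C.T @ dZ
    gb_e = dZ.sum(axis=0)
    W_e -= lr * gW_e
    b_e -= lr * gb_e
    W_d -= lr * gW_d
loss_after = sae_loss(C)
```

The key property the sketch preserves is exact additivity: summing a sample's contribution vector reproduces the network output, so any sparse mode learned over these vectors has a direct, causal reading as a component of that output.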