From Syntax to Emotion: A Mechanistic Analysis of Emotion Inference in LLMs
2026-04-28 • Computation and Language
AI summary
The authors studied how large language models understand emotions by looking inside the models with a technique called sparse autoencoders. They found that emotion-related information appears only in the later stages of the model's processing and is made up of both common features shared across emotions and features unique to each emotion. They also discovered that a small set of features has a strong effect on emotion predictions, and that some emotions, such as Disgust, are represented less clearly than others. Finally, the authors developed a method that improves emotion recognition in these models without hurting their general language skills, and this improvement held across multiple datasets.
large language models, emotion recognition, sparse autoencoders, feature activations, causal tracing, emotion representation, disgust, model interpretability, causal feature steering
Authors
Bangzhao Shu, Arinjay Singh, Mai ElSherief
Abstract
Large language models (LLMs) are increasingly used in emotionally sensitive human-AI applications, yet little is known about how emotion recognition is internally represented. In this work, we investigate the internal mechanisms of emotion recognition in LLMs using sparse autoencoders (SAEs). By analyzing sparse feature activations across layers, we identify a consistent three-phase information flow, in which emotion-related features emerge only in the final phase. We further show that emotion representations comprise both shared features across emotions and emotion-specific features. Using phase-stratified causal tracing, we identify a small set of features that strongly influence emotion predictions, and show that both their number and causal impact vary across emotions; in particular, Disgust is more weakly and diffusely represented than other emotions. Finally, we propose an interpretable and data-efficient causal feature steering method that significantly improves emotion recognition performance across multiple models while largely preserving language modeling ability, and demonstrate that these improvements generalize across multiple emotion recognition datasets. Overall, our findings provide a systematic analysis of the internal mechanisms underlying emotion recognition in LLMs and introduce an efficient, interpretable, and controllable approach for improving model performance.
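To make the two core techniques concrete, below is a minimal PyTorch sketch of reading per-layer SAE feature activations and applying causal feature steering by adding a scaled SAE decoder direction to a layer's residual stream. This is an illustration under stated assumptions, not the authors' implementation: the SparseAutoencoder class, steering_hook, and the names LAYER, FEATURE_IDX, and ALPHA are all hypothetical, and the paper's actual steering procedure may differ in detail.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    # Standard SAE over one layer's residual stream:
    # features f = ReLU(W_enc h + b_enc), reconstruction = W_dec f + b_dec.
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def encode(self, h: torch.Tensor) -> torch.Tensor:
        # Per-token sparse feature activations, shape (..., d_sae).
        return torch.relu(h @ self.W_enc.T + self.b_enc)

def steering_hook(sae: SparseAutoencoder, feature_idx: int, alpha: float):
    # Forward hook that adds alpha * (unit-norm decoder direction of one
    # SAE feature) to the layer's output hidden states, nudging the model
    # toward the concept that feature encodes.
    direction = sae.W_dec[:, feature_idx]
    direction = direction / direction.norm()

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * direction.to(hidden)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

    return hook

# Illustrative usage with a HuggingFace-style decoder model (hypothetical):
# sae = SparseAutoencoder(d_model=4096, d_sae=32768)  # load trained weights
# feats = sae.encode(hidden_states)                   # per-layer activation analysis
# layer = model.model.layers[LAYER]                   # a late layer, where the paper
#                                                     # finds emotion features emerge
# handle = layer.register_forward_hook(steering_hook(sae, FEATURE_IDX, ALPHA))
# ...run emotion-recognition prompts and compare predictions...
# handle.remove()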