What Drives Representation Steering? A Mechanistic Case Study on Steering Refusal

2026-04-09

Machine Learning, Artificial Intelligence, Computation and Language
AI summary

The authors studied how steering vectors change the behavior of large language models in order to understand why this method works. They found that steering vectors mostly affect the model's attention mechanism through the OV circuit, while largely leaving the QK circuit untouched. By examining these circuits, the authors discovered that steering vectors can be made much sparser with little loss in performance. This suggests steering vectors work by tweaking specific internal components in a way that can be explained and optimized.

Keywords
steering vectors, large language models, model alignment, attention mechanism, OV circuit, QK circuit, activation patching, circuit analysis, sparsification, model interpretability
Authors
Stephen Cheng, Sarah Wiegreffe, Dinesh Manocha
Abstract
Applying steering vectors to large language models (LLMs) is an efficient and effective model alignment technique, but we lack an interpretable explanation of how it works: specifically, which internal mechanisms steering vectors affect and how this results in different model outputs. To investigate the causal mechanisms underlying the effectiveness of steering vectors, we conduct a comprehensive case study on refusal. We propose a multi-token activation patching framework and discover that different steering methodologies leverage functionally interchangeable circuits when applied at the same layer. These circuits reveal that steering vectors primarily interact with the attention mechanism through the OV circuit while largely ignoring the QK circuit: freezing all attention scores during steering drops performance by only 8.75% across two model families. A mathematical decomposition of the steered OV circuit further reveals semantically interpretable concepts, even in cases where the steering vector itself does not. Leveraging the activation patching results, we show that steering vectors can be sparsified by 90-99% while retaining most performance, and that different steering methodologies agree on a subset of important dimensions.
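The two operations the abstract refers to, adding a steering vector to residual-stream activations and sparsifying that vector by keeping only its largest-magnitude dimensions, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the magnitude-based selection rule, the scaling factor `alpha`, and the keep fraction are illustrative assumptions.

```python
import numpy as np

def apply_steering(activations, steering_vec, alpha=1.0):
    # Add a scaled steering vector to every token's residual-stream activation
    # (broadcast over the token dimension).
    return activations + alpha * steering_vec

def sparsify(steering_vec, keep_fraction=0.05):
    # Zero out all but the k largest-magnitude dimensions of the steering vector.
    # Magnitude-based top-k is one simple sparsification rule (an assumption here).
    k = max(1, int(len(steering_vec) * keep_fraction))
    idx = np.argsort(np.abs(steering_vec))[-k:]  # indices of the k largest |components|
    sparse = np.zeros_like(steering_vec)
    sparse[idx] = steering_vec[idx]
    return sparse

rng = np.random.default_rng(0)
v = rng.normal(size=4096)           # hypothetical steering vector (model dim 4096)
acts = rng.normal(size=(8, 4096))   # hypothetical activations for 8 tokens
steered = apply_steering(acts, sparsify(v, keep_fraction=0.05))
print(steered.shape)  # (8, 4096)
```

A 5% keep fraction corresponds to the 95% sparsification regime the abstract reports; in practice the retained dimensions would be chosen from the paper's activation patching results rather than by raw magnitude.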