Crafting Reversible SFT Behaviors in Large Language Models

2026-05-07

Machine Learning
AI summary

The authors study how to isolate specific new behaviors taught to large language models during fine-tuning into small, essential subnetworks called carriers. They introduce a method called Loss-Constrained Dual Descent (LCDD) to construct these carriers and a tool named SFT-Eraser to switch the behavior off at inference time without modifying the model's weights. Their experiments show that the carriers are causally necessary for the behaviors and can be selectively controlled, whereas previous approaches identified only correlated components without establishing causality. This work enables more precise understanding and management of behaviors added by fine-tuning.

Supervised Fine-Tuning (SFT), Large Language Models (LLMs), Subnetwork, Circuit Attribution, Causal Necessity, Loss-Constrained Dual Descent (LCDD), Soft Prompting, Activation Matching, Behavior Reversion, Model Interpretability
Authors
Yuping Lin, Pengfei He, Yue Xing, Yingqian Cui, Jiayuan Ding, Subhabrata Mukherjee, Hui Liu, Zhen Xiang
Abstract
Supervised fine-tuning (SFT) induces new behaviors in large language models, yet imposes no structural constraint on how these behaviors are distributed within the model. Existing behavior interpretation methods, such as circuit attribution approaches, identify sparse subnetworks correlated with SFT-induced behaviors post-hoc. However, such correlations do not imply *causal necessity*, limiting the ability to selectively control SFT-induced behaviors at inference time. We pursue an alternative by asking: can an SFT-induced behavior be deliberately compressed into a sparse, mechanistically necessary subnetwork, termed a *carrier*, while remaining controllable at inference time without weight modification? We propose (a) **Loss-Constrained Dual Descent (LCDD)**, which constructs such carriers by jointly optimizing routing masks and model weights under an explicit utility budget, and (b) **SFT-Eraser**, a soft prompt optimized via activation matching on extracted carrier channels, to reverse the SFT-induced behavior. Across safety, fixed-response, and style behaviors on multiple model families, LCDD yields sparse carriers that preserve target behaviors while enabling strong reversion when triggered by SFT-Eraser. Ablations further establish that the sparse structure is the key precondition for reversal: the same trigger optimization fails on standard SFT models, confirming that structure rather than trigger design is the operative factor. These results provide direct evidence that the learned carriers are causally necessary for the behaviors, pointing to a new direction for systematically localizing and selectively suppressing SFT-induced behaviors in deployed models.
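At its core, the LCDD objective described in the abstract is a constrained optimization: make the carrier as sparse as possible while keeping task loss within an explicit utility budget, solved by descending on the primal variables and ascending on a Lagrange multiplier. The scalar toy below is a minimal sketch of that dual-descent mechanic only; the problem, variable names, and step sizes are illustrative assumptions, not the paper's implementation:

```python
# Hedged toy sketch of loss-constrained dual descent (illustrative, not LCDD itself).
# We minimize a "mask density" m subject to a task-loss constraint
#   loss(m) = (1 - m)^2 <= budget,
# via the Lagrangian  L(m, lam) = m + lam * (loss(m) - budget):
# gradient descent on the primal variable m, gradient ascent on the dual
# variable lam, which tightens whenever the loss exceeds the budget.

budget = 0.04          # explicit utility budget on the task loss
m, lam = 0.0, 0.0      # primal variable (density) and dual multiplier
lr, lr_dual = 0.01, 0.1

for _ in range(20000):
    loss = (1.0 - m) ** 2
    grad_m = 1.0 + lam * (-2.0) * (1.0 - m)   # d/dm [m + lam * (loss - budget)]
    m -= lr * grad_m                           # primal descent: sparsify
    lam = max(0.0, lam + lr_dual * (loss - budget))  # dual ascent: enforce budget

# At the saddle point the constraint is active: loss = budget,
# so 1 - m = sqrt(0.04) = 0.2, giving m = 0.8 and lam = 1 / (2 * 0.2) = 2.5.
print(round(m, 2), round(lam, 2))
```

In the full method the primal variables would be per-channel routing masks and model weights rather than one scalar, but the structure is the same: the multiplier rises whenever the utility budget is violated, pulling the solution back toward behavior-preserving carriers while the primal step keeps shrinking them.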