Optimizer-Model Consistency: Full Finetuning with the Same Optimizer as Pretraining Forgets Less
2026-05-07 • Machine Learning
Machine Learning • Artificial Intelligence
AI summary
The authors find that fully finetuning a large language model with the same optimizer used during pretraining preserves previously learned knowledge better than switching optimizers or using methods such as LoRA. They call this property optimizer-model consistency. Their experiments and theory suggest that optimizers shape the model's internal structure through a regularization effect, and that keeping the same optimizer during finetuning reduces forgetting. They also show that, compared with AdamW, the Muon optimizer leans toward rote memorization, which can hurt the acquisition of new patterns from the limited data available in finetuning.
optimizer, finetuning, pretraining, large language models, LoRA, AdamW, model forgetting, regularization, weight update, synthetic language modeling
Authors
Yuxing Liu, Jianyu Wang, Tong Zhang
Abstract
Optimizers play an important role in both the pretraining and finetuning stages of training large language models (LLMs). In this paper, we observe that, during the supervised finetuning (SFT) stage, full finetuning with the same optimizer as in pretraining achieves a better learning-forgetting tradeoff, i.e., it forgets less while reaching the same or better performance on the new task, than other optimizers and, perhaps surprisingly, LoRA. We term this phenomenon optimizer-model consistency. To better understand it, through controlled experiments and theoretical analysis, we show that: 1) optimizers can shape models by imposing regularization effects on the activations, leading to different landscapes around the pretrained checkpoints; 2) in response to this regularization effect, the weight updates in SFT should follow specific structures to reduce forgetting of the knowledge learned in pretraining, and these structures are obtained by using the same optimizer. Moreover, we specifically compare Muon and AdamW when each is employed throughout both the pretraining and SFT stages, and find that Muon performs worse when finetuned for reasoning tasks. With a synthetic language modeling experiment, we demonstrate that this can stem from Muon's strong tendency towards rote memorization, which may hurt pattern acquisition when data is limited, as in SFT.
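To make the setup described in the abstract concrete, below is a minimal PyTorch sketch of "optimizer-model consistency": reusing the same optimizer choice for SFT that was used for pretraining, rather than switching to a different one. This is purely illustrative and not the authors' code; the toy model, random data, and hyperparameters are placeholders standing in for an LLM and its pretraining/SFT corpora.

```python
# Illustrative sketch (not the paper's code): optimizer-model consistency means
# the SFT stage reuses the same optimizer family (here AdamW) as pretraining.
import torch
import torch.nn as nn


def make_optimizer(model: nn.Module) -> torch.optim.Optimizer:
    # Single factory so pretraining and SFT share the same optimizer choice
    # and hyperparameter style (values here are placeholders).
    return torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.1)


def train_phase(model, optimizer, batches, loss_fn):
    # Generic training loop used for both the pretraining and SFT phases.
    model.train()
    for inputs, targets in batches:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()


# Toy regression model and synthetic batches stand in for an LLM and its data.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
loss_fn = nn.MSELoss()
pretrain_batches = [(torch.randn(8, 16), torch.randn(8, 16)) for _ in range(10)]
sft_batches = [(torch.randn(8, 16), torch.randn(8, 16)) for _ in range(3)]

# "Pretraining" with AdamW ...
train_phase(model, make_optimizer(model), pretrain_batches, loss_fn)
# ... followed by full finetuning with the *same* optimizer choice, the
# configuration the paper reports as forgetting less than switching
# optimizers or using LoRA.
train_phase(model, make_optimizer(model), sft_batches, loss_fn)
```

In this sketch the only design decision being illustrated is routing both phases through one optimizer factory; the paper's actual comparisons (e.g., AdamW vs. Muon, full finetuning vs. LoRA) are experimental results, not something this toy loop reproduces.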