LLMTailor: A Layer-wise Tailoring Tool for Efficient Checkpointing of Large Language Models

2026-02-25

Distributed, Parallel, and Cluster Computing
AI summary

The authors study ways to save progress during the training of large language models without saving everything all the time, which takes up a lot of space and time. They found that not all parts of the model change equally during training, so saving only the parts that change a lot could help. To do this, they created a tool called LLMTailor that combines parts saved at different times into a new checkpoint. Their tests show that this approach saves space and time without hurting the model's learning.
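The core observation, that only some layers change meaningfully between checkpoints, can be illustrated with a small sketch. The threshold, file names, and per-tensor relative-L2 criterion below are illustrative assumptions, not the paper's actual selection rule.

```python
# Minimal sketch (not the authors' implementation): measure how much each
# tensor changed between two checkpoints and keep only those whose relative
# change exceeds a hypothetical threshold.
import torch

def changed_layers(prev_path, curr_path, threshold=1e-3):
    prev = torch.load(prev_path, map_location="cpu")
    curr = torch.load(curr_path, map_location="cpu")
    selected = {}
    for name, curr_w in curr.items():
        prev_w = prev.get(name)
        if prev_w is None:
            selected[name] = curr_w  # tensor not present before: always keep
            continue
        # Relative L2 change of this tensor since the previous checkpoint.
        delta = torch.norm(curr_w.float() - prev_w.float()) / (
            torch.norm(prev_w.float()) + 1e-12
        )
        if delta.item() > threshold:
            selected[name] = curr_w
    return selected

# Saving only the selected tensors yields a much smaller "delta" checkpoint:
# torch.save(changed_layers("step_1000.pt", "step_2000.pt"), "delta_2000.pt")
```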

checkpointing, large language models, fault tolerance, model training, optimizer states, layer updates, storage overhead, checkpoint merging, LLMTailor, selective checkpointing
Authors
Minqiu Sun, Xin Huang, Luanzheng Guo, Nathan R. Tallent, Kento Sato, Dong Dai
Abstract
Checkpointing is essential for fault tolerance in training large language models (LLMs). However, existing methods, regardless of their I/O strategies, periodically store the entire model and optimizer states, incurring substantial storage overhead and resource contention. Recent studies reveal that updates across LLM layers are highly non-uniform. Across training steps, some layers may undergo more significant changes, while others remain relatively stable or even unchanged. This suggests that selectively checkpointing only layers with significant updates could reduce overhead without harming training. Implementing such selective strategies requires fine-grained control over both weights and optimizer states, which no current tool provides. To address this gap, we propose LLMTailor, a checkpoint-merging framework that filters and assembles layers from different checkpoints to form a composite checkpoint. Our evaluation indicates that LLMTailor can work with different selective checkpointing strategies and effectively reduce checkpoint size (e.g., 4.3 times smaller for Llama3.1-8B) and checkpoint time (e.g., 2.8 times faster for Qwen2.5-7B) while maintaining model quality.
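To make the "filter and assemble" idea concrete, the following sketch overlays a selective (partial) checkpoint onto the most recent full checkpoint to reconstruct a complete, loadable state dict. This is a hedged illustration of checkpoint merging in general, not LLMTailor's actual API; the file names and flat state-dict layout are assumptions.

```python
# Minimal sketch of checkpoint merging: layers stored in a partial checkpoint
# replace the corresponding (stale) entries of a base full checkpoint, yielding
# a composite checkpoint that can be loaded like a regular full one.
import torch

def merge_checkpoints(base_path, delta_path, out_path):
    base = torch.load(base_path, map_location="cpu")    # full weights + optimizer states
    delta = torch.load(delta_path, map_location="cpu")  # only layers with significant updates
    base.update(delta)                                  # newer layers override stale ones
    torch.save(base, out_path)
    return out_path

# Example usage (hypothetical paths):
# merge_checkpoints("full_step_1000.pt", "delta_2000.pt", "composite_2000.pt")
```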