Olmix: A Framework for Data Mixing Throughout LM Development
2026-02-12 • Machine Learning
Machine Learning · Artificial Intelligence · Computation and Language
AI summary
The authors study how to best combine data from different sources when training language models, a process called data mixing. They identify important design choices in mixing methods that improve performance and address practical issues often overlooked. The authors also introduce a technique called mixture reuse, which updates data mixing ratios efficiently when the set of data sources changes during model development. This method saves computing resources and improves model performance compared to not mixing data. Overall, their work helps make data mixing more practical and effective for real-world language model training.
Data mixing · Language models · Domain adaptation · Model training · Data domains · Empirical study · Mixture reuse · Compute efficiency · Downstream tasks
Authors
Mayee F. Chen, Tyler Murray, David Heineman, Matt Jordan, Hannaneh Hajishirzi, Christopher Ré, Luca Soldaini, Kyle Lo
Abstract
Data mixing -- determining the ratios of data from different domains -- is a first-order concern for training language models (LMs). While existing mixing methods show promise, they fall short when applied during real-world LM development. We present Olmix, a framework that addresses two such challenges. First, the configuration space for developing a mixing method is not well understood -- design choices across existing methods lack justification or consensus and overlook practical issues like data constraints. We conduct a comprehensive empirical study of this space, identifying which design choices lead to a strong mixing method. Second, in practice, the domain set evolves throughout LM development as datasets are added, removed, partitioned, and revised -- a problem setting largely unaddressed by existing works, which assume fixed domains. We study how to efficiently recompute the mixture after the domain set is updated, leveraging information from past mixtures. We introduce mixture reuse, a mechanism that reuses existing ratios and recomputes ratios only for domains affected by the update. Over a sequence of five domain-set updates mirroring real-world LM development, mixture reuse matches the performance of fully recomputing the mix after each update with 74% less compute and improves over training without mixing by 11.6% on downstream tasks.
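The core idea of mixture reuse, as the abstract describes it, is to keep the existing ratios for domains untouched by a domain-set update and recompute ratios only for the affected domains. The sketch below is an illustrative formalization of that idea, not the paper's exact algorithm: it assumes the recomputed ratios for the affected domains are supplied as a relative distribution (the function names and the renormalization scheme are assumptions for illustration).

```python
def reuse_mixture(old_mix, affected, new_ratios_for_affected):
    """Update a data mixture after a domain-set change.

    old_mix: dict mapping domain -> ratio (sums to 1) from the previous mixture.
    affected: set of domains touched by the update (added, removed, partitioned,
        or revised); removed domains simply do not appear in the new ratios.
    new_ratios_for_affected: dict mapping each surviving/new affected domain to
        its recomputed relative ratio (sums to 1 over the affected domains).
    """
    # Reuse ratios for domains the update did not touch.
    kept = {d: r for d, r in old_mix.items() if d not in affected}
    # Probability mass freed up by the affected domains.
    freed = 1.0 - sum(kept.values())
    # Spread the freed mass over the affected domains per their new ratios.
    updated = {d: freed * r for d, r in new_ratios_for_affected.items()}
    mix = {**kept, **updated}
    assert abs(sum(mix.values()) - 1.0) < 1e-9  # mixture stays normalized
    return mix
```

For example, if `old_mix = {"web": 0.5, "code": 0.3, "math": 0.2}` and an update revises `code` while adding a new `books` domain with recomputed relative ratios `{"code": 0.6, "books": 0.4}`, the reused mixture keeps `web` and `math` at their old ratios and reallocates the remaining 0.3 of mass between `code` and `books`.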