A Dataset is Worth 1 MB
2026-02-26 • Machine Learning
Machine Learning • Computer Vision and Pattern Recognition
AI summary
The authors address the problem of distributing the same large dataset to many clients, which is costly. Instead of sending actual images, their method, PLADA, transmits only labels for images drawn from a large, generic reference dataset that the clients already hold. They also introduce a way to select only the reference images most relevant to the target task, keeping the payload small and useful. Their experiments show that this method can transfer task information with very little data sent while still achieving strong learning results.
dataset distillation • reference dataset • pseudo-labels • data pruning • classification accuracy • ImageNet • task-specific models • payload size • semantic relevance
Authors
Elad Kimchi Shoshani, Leeyam Gabay, Yedid Hoshen
Abstract
A dataset server must often distribute the same large payload to many clients, incurring massive communication costs. Since clients frequently operate on diverse hardware and software frameworks, transmitting a pre-trained model is often infeasible; instead, clients require raw data to train their own task-specific models locally. While dataset distillation attempts to compress training signals, current methods struggle to scale to high-resolution data and rarely achieve sufficiently small files. In this paper, we propose Pseudo-Labels as Data (PLADA), a method that completely eliminates pixel transmission. We assume clients are preloaded with a large, generic, unlabeled reference dataset (e.g., ImageNet-1K, ImageNet-21K) and communicate a new task by transmitting only the class labels for specific images. To address the distribution mismatch between the reference and target datasets, we introduce a pruning mechanism that filters the reference dataset to retain only the labels of the most semantically relevant images for the target task. This selection process simultaneously maximizes training efficiency and minimizes transmission payload. Experiments on 10 diverse datasets demonstrate that our approach can transfer task knowledge with a payload of less than 1 MB while retaining high classification accuracy, offering a promising solution for efficient dataset serving.
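The abstract does not specify how relevance is scored or how the payload is encoded, so the following is only a minimal sketch of the core idea: prune a preloaded reference set by semantic similarity to the target task, then transmit (index, pseudo-label) pairs instead of pixels. The function names, the cosine-similarity-to-prototype criterion, and the byte layout are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the PLADA idea (assumptions noted above), not the paper's code.
import numpy as np

def select_relevant(ref_embeddings, target_embeddings, k):
    """Prune the reference set: keep the k reference images most similar
    (by cosine similarity) to the mean embedding of the target task.
    Assumes embeddings come from some shared pretrained encoder."""
    ref = ref_embeddings / np.linalg.norm(ref_embeddings, axis=1, keepdims=True)
    proto = target_embeddings.mean(axis=0)
    proto = proto / np.linalg.norm(proto)
    scores = ref @ proto                      # one similarity score per reference image
    return np.argsort(scores)[::-1][:k]       # indices of the k most relevant images

def build_payload(indices, pseudo_labels):
    """The payload carries only (reference index, class label) pairs, no pixels.
    With uint32 indices and uint16 labels, each entry costs 6 bytes, so
    roughly 174k labeled images fit in a 1 MB payload."""
    idx = np.asarray(indices, dtype=np.uint32)
    lab = np.asarray(pseudo_labels, dtype=np.uint16)
    return idx.tobytes() + lab.tobytes()

# Toy usage: random vectors stand in for real encoder embeddings.
rng = np.random.default_rng(0)
ref_emb = rng.normal(size=(10_000, 512))      # client's preloaded reference set
tgt_emb = rng.normal(size=(100, 512))         # a few target-task examples on the server
keep = select_relevant(ref_emb, tgt_emb, k=1_000)
labels = rng.integers(0, 10, size=keep.size)  # pseudo-labels, e.g. from a teacher model
payload = build_payload(keep, labels)
print(f"payload: {len(payload) / 1e6:.3f} MB")  # 0.006 MB here, well under 1 MB
```

A client receiving this payload would look up the referenced images in its local copy of the reference dataset and train on them with the transmitted labels, which is what lets the payload stay in the kilobyte-to-megabyte range regardless of image resolution.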