Neural Scaling Laws for Boosted Jet Tagging

2026-02-17

Machine Learning
AI summary

The authors studied how scaling up machine learning models and training compute affects performance in analyzing particle physics data, focusing on a task called boosted jet classification using the JetClass dataset. They found that more computing power generally improves results up to an asymptotic limit, and that more detailed, lower-level input features can raise that limit. They also examined how repeating data, which is common when simulations are expensive, changes how much benefit additional data provides. Their work helps clarify how best to allocate computing resources for particle physics machine learning tasks.

Large Language Models · Neural Scaling Laws · Boosted Jet Classification · JetClass Dataset · Compute Optimal Scaling · High Energy Physics · Data Repetition · Particle Multiplicity · Model Capacity · Asymptotic Performance
Authors
Matthias Vigl, Nicole Hartman, Michael Kagan, Lukas Heinrich
Abstract
The success of Large Language Models (LLMs) has established that scaling compute, through joint increases in model capacity and dataset size, is the primary driver of performance in modern machine learning. While machine learning has long been an integral component of High Energy Physics (HEP) data analysis workflows, the compute used to train state-of-the-art HEP models remains orders of magnitude below that of industry foundation models. With scaling laws only beginning to be studied in the field, we investigate neural scaling laws for boosted jet classification using the public JetClass dataset. We derive compute-optimal scaling laws and identify an effective performance limit that can be consistently approached through increased compute. We study how data repetition, common in HEP where simulation is expensive, modifies the scaling, yielding a quantifiable effective dataset size gain. We then study how the scaling coefficients and asymptotic performance limits vary with the choice of input features and particle multiplicity, demonstrating that increased compute reliably drives performance toward an asymptotic limit, and that more expressive, lower-level features can raise the performance limit and improve results at fixed dataset size.
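
To make the idea of an asymptotic performance limit concrete, the sketch below fits a saturating power law of the form L(C) = L_inf + a · C^(-b) to loss-versus-compute measurements. This functional form is a common parameterization in the scaling-law literature and is assumed here for illustration; the specific fit form, data, and variable names are not taken from the paper.

```python
# Illustrative sketch: fitting a saturating power-law scaling curve,
# assuming validation loss follows L(C) = L_inf + a * C**(-b) as a
# function of training compute C. All numbers below are dummy values.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(compute, loss_inf, a, b):
    """Saturating power law: loss approaches loss_inf as compute grows."""
    return loss_inf + a * compute ** (-b)

# Example compute budgets (arbitrary units) and observed validation losses.
compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])
loss = np.array([0.62, 0.55, 0.51, 0.49, 0.485])

# Fit the three parameters; bounds keep the exponent b positive.
params, _ = curve_fit(
    scaling_law, compute, loss,
    p0=(0.45, 10.0, 0.1),
    bounds=([0.0, 0.0, 0.0], [1.0, np.inf, 1.0]),
    maxfev=10000,
)
loss_inf, a, b = params
print(f"asymptotic loss ~ {loss_inf:.3f}, exponent b ~ {b:.3f}")
```

Under this parameterization, loss_inf plays the role of the effective performance limit that additional compute approaches but cannot surpass, while the exponent b controls how quickly extra compute pays off.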