Out-of-distribution transfer of PDE foundation models to material dynamics under extreme loading
2026-03-04 • Machine Learning
AI summary
The authors tested how well two advanced models (POSEIDON and MORPH), initially trained on smooth fluid problems, work on much harder tasks involving sudden changes like shocks and material fractures. They focused on predicting the final state of a system from its initial state, without helping the model step-by-step. By comparing models fine-tuned from pretrained weights versus trained from scratch, and using different amounts of data, the authors studied how efficiently these models learn when asked to handle unfamiliar and rough scenarios. This helps understand the limits of current PDE foundation models under challenging material dynamics.
PDE foundation models, pretraining, fine-tuning, shock dynamics, material fracture, out-of-distribution transfer, terminal-state prediction, sample efficiency, multi-material interface, dynamic fracture
Authors
Mahindra Rautela, Alexander Most, Siddharth Mansingh, Aleksandra Pachalieva, Bradley Love, Daniel O Malley, Alexander Scheinker, Kyle Hickmann, Diane Oyen, Nathan Debardeleben, Earl Lawrence, Ayan Biswas
Abstract
Most PDE foundation models are pretrained and fine-tuned on fluid-centric benchmarks, so their utility under extreme-loading material dynamics remains unclear. We benchmark out-of-distribution transfer on two discontinuity-dominated regimes in which shocks, evolving interfaces, and fracture produce highly non-smooth fields: shock-driven multi-material interface dynamics (perturbed layered interface, PLI) and dynamic fracture/failure evolution (FRAC). We formulate the downstream task as terminal-state prediction, i.e., learning a long-horizon map that predicts the final state directly from the first snapshot, without intermediate supervision. Using a unified training and evaluation protocol, we evaluate two open-source pretrained PDE foundation models, POSEIDON and MORPH, and compare fine-tuning from pretrained weights against training from scratch across training-set sizes to quantify sample efficiency under distribution shift.
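To make the evaluation setup concrete, the following is a minimal sketch (not the authors' code) of the terminal-state prediction task and the sample-efficiency sweep: a model is fit on (initial snapshot, final state) pairs only, with no intermediate time steps, and test error is tracked as the training-set size grows. A least-squares linear map stands in for a fine-tuned foundation model, and a synthetic linear "solver" stands in for the PDE; all names and sizes here are illustrative assumptions.

```python
# Sketch of terminal-state prediction with a sample-efficiency sweep.
# Assumptions (not from the paper): synthetic linear dynamics, a
# least-squares surrogate in place of POSEIDON/MORPH, toy dimensions.
import numpy as np

rng = np.random.default_rng(0)

d = 32  # flattened field dimension (illustrative)
A_true = rng.normal(size=(d, d)) / np.sqrt(d)  # unknown long-horizon map

def simulate(n):
    """Return n (initial snapshot, terminal state) pairs; no intermediate frames."""
    u0 = rng.normal(size=(n, d))
    uT = u0 @ A_true.T
    return u0, uT

def fit_and_test(n_train, n_test=256):
    """Fit a direct u0 -> uT map on n_train pairs; return held-out MSE."""
    u0_tr, uT_tr = simulate(n_train)
    u0_te, uT_te = simulate(n_test)
    # Least-squares stand-in for training/fine-tuning a surrogate model.
    A_hat, *_ = np.linalg.lstsq(u0_tr, uT_tr, rcond=None)
    pred = u0_te @ A_hat
    return float(np.mean((pred - uT_te) ** 2))

# Sample-efficiency curve: held-out error versus training-set size.
errors = {n: fit_and_test(n) for n in (16, 64, 256)}
print(errors)
```

In the paper's setting, the same sweep would be run twice per benchmark (PLI and FRAC), once initializing from pretrained weights and once from scratch, and the two error-versus-data curves compared to quantify sample efficiency under distribution shift.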