Di-BiLPS: Denoising induced Bidirectional Latent-PDE-Solver under Sparse Observations

2026-05-13 · Machine Learning

Machine Learning · Artificial Intelligence
AI summary

The authors developed Di-BiLPS, a new neural network system to solve partial differential equations (PDEs) even when data inputs are extremely sparse. Their method compresses data into a smaller form, uses a special diffusion process to handle uncertainty, and aligns data representations for better learning. This approach works efficiently, maintains accuracy with very few inputs, and can produce detailed results without extra training. Tests showed it outperforms existing methods while using less computation.

Partial Differential Equations · Neural Networks · Variational Autoencoder · Latent Space · Diffusion Models · Contrastive Learning · Sparse Observations · Inverse Problems · Super-resolution · Denoising Algorithms
Authors
Zhonghao Li, Chaoyu Liu, Qian Zhang
Abstract
Partial differential equations (PDEs) are fundamental for modeling complex natural and physical phenomena. In many real-world applications, however, observational data are extremely sparse, which severely limits the applicability of both classical numerical solvers and existing neural approaches. While neural methods have shown promising results under moderately sparse observations, their inference efficiency at high resolutions is limited, and their accuracy degrades substantially in the extremely sparse regime. In this work, we propose Di-BiLPS, a unified neural framework that effectively handles both forward and inverse PDE problems under extremely sparse observations. Di-BiLPS combines a variational autoencoder that compresses high-dimensional inputs into a compact latent space, a latent diffusion module that models uncertainty, and contrastive learning that aligns representations. Operating entirely in this latent space, the framework achieves efficient inference while retaining a flexible input-output mapping. In addition, we introduce a PDE-informed denoising algorithm based on a variance-preserving diffusion process, which further improves inference efficiency. Extensive experiments on multiple PDE benchmarks demonstrate that Di-BiLPS consistently achieves state-of-the-art performance under extremely sparse inputs (as low as 3%) while substantially reducing computational cost. Moreover, Di-BiLPS enables zero-shot super-resolution, as it allows predictions over continuous spatio-temporal domains.
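The variance-preserving diffusion process mentioned in the abstract can be illustrated with the standard DDPM-style forward-noising step. The sketch below is a generic illustration under common assumptions (a linear beta schedule and unit-variance latent codes); all names and shapes are hypothetical and are not the authors' implementation.

```python
import numpy as np

def vp_schedule(T=1000, beta_min=1e-4, beta_max=2e-2):
    """Linear beta schedule; returns per-step noise rates and the
    cumulative products (alpha-bars) used by the VP forward process."""
    betas = np.linspace(beta_min, beta_max, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alpha_bars

def forward_noise(z0, t, alpha_bars, rng):
    """VP forward step: z_t = sqrt(abar_t) * z_0 + sqrt(1 - abar_t) * eps.
    The variance of z_t stays ~1 when z_0 has unit variance, hence
    'variance-preserving'."""
    eps = rng.standard_normal(z0.shape)
    zt = np.sqrt(alpha_bars[t]) * z0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return zt, eps

rng = np.random.default_rng(0)
_, abar = vp_schedule()
z0 = rng.standard_normal(10_000)   # stand-in for a latent code from a VAE encoder
zt, eps = forward_noise(z0, 500, abar, rng)
```

In a latent-diffusion pipeline of the kind the abstract describes, a learned denoiser would invert this process on the VAE's latent codes before decoding back to the PDE solution field; the PDE-informed denoising step would additionally condition each reverse update on the sparse observations.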