Neu-PiG: Neural Preconditioned Grids for Fast Dynamic Surface Reconstruction on Long Sequences

2026-02-25
Computer Vision and Pattern Recognition

AI summary

The authors developed Neu-PiG, a method to quickly and accurately reconstruct changing 3D shapes over time from unordered sets of points. Instead of slowly adjusting shapes step-by-step or requiring complex category-specific training, they use a special grid that encodes shape changes across all time steps at once and a simple network to decode these into movements. This approach avoids accumulating errors and runs much faster than earlier methods, performing well on sequences of moving humans and animals. Their technique does not require predefined matching points or extra assumptions, making it easier and more reliable for long sequences.

3D surface reconstruction, point cloud, latent grid encoding, deformation optimization, multilayer perceptron (MLP), Sobolev preconditioning, 6-DoF deformation, time modulation, temporally consistent, unstructured data
Authors
Julian Kaltheuner, Hannah Dröge, Markus Plack, Patrick Stotko, Reinhard Klein
Abstract
Temporally consistent surface reconstruction of dynamic 3D objects from unstructured point cloud data remains challenging, especially for very long sequences. Existing methods either optimize deformations incrementally, risking drift and requiring long runtimes, or rely on complex learned models that demand category-specific training. We present Neu-PiG, a fast deformation optimization method based on a novel preconditioned latent-grid encoding. Our method encodes entire deformations across all time steps, at multiple spatial scales, into a multi-resolution latent grid parameterized by the position and normal direction of a reference surface from a single keyframe. This latent representation is then augmented for time modulation and decoded into per-frame 6-DoF deformations via a lightweight multilayer perceptron (MLP). To achieve high-fidelity, drift-free surface reconstructions in seconds, we employ Sobolev preconditioning during gradient-based training of the latent space, completely avoiding the need for any explicit correspondences or further priors. Experiments across diverse human and animal datasets demonstrate that Neu-PiG outperforms state-of-the-art approaches, offering both superior accuracy and scalability to long sequences while running at least 60x faster than existing training-free methods and achieving inference speeds on the same order as heavy pretrained models.
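To make the pipeline described in the abstract concrete, here is a minimal numpy sketch of its two key ingredients: querying a multi-resolution latent grid at a keyframe surface point, modulating the feature with a time embedding, and decoding a 6-DoF deformation with a small MLP; plus a Jacobi-smoothed approximation of a Sobolev-preconditioned gradient. All names, shapes, and the sinusoidal time embedding are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class LatentGrid:
    """Multi-resolution feature grids; nearest-neighbor lookup for brevity."""
    def __init__(self, resolutions=(8, 16, 32), feat_dim=4):
        self.levels = [rng.normal(size=(r, r, r, feat_dim)) * 0.01
                       for r in resolutions]

    def query(self, p):
        # p is a position in [0,1]^3; concatenate features from all levels.
        feats = []
        for g in self.levels:
            r = g.shape[0]
            i, j, k = np.clip((p * r).astype(int), 0, r - 1)
            feats.append(g[i, j, k])
        return np.concatenate(feats)

def mlp(x, weights):
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)  # ReLU hidden layers
    W, b = weights[-1]
    return x @ W + b                    # linear head

def decode_deformation(grid, position, normal, t, weights):
    # Time modulation via a simple sinusoidal embedding of frame time t
    # (an assumption; the paper only states the latent is "augmented").
    time_emb = np.array([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    z = np.concatenate([grid.query(position), normal, time_emb])
    return mlp(z, weights)  # 6 values: 3 rotation + 3 translation params

def sobolev_precondition(grad, lam=0.5, iters=10):
    # Approximate the Sobolev gradient (I - lam*Laplacian)^-1 @ grad on a
    # periodic 3D grid with Jacobi iterations: a smoothed descent direction.
    h = grad.copy()
    for _ in range(iters):
        nb = sum(np.roll(h, s, axis=a) for a in range(3) for s in (1, -1))
        h = (grad + lam * nb) / (1.0 + 6.0 * lam)
    return h

# Input dim: 3 levels * 4 features + 3 normal + 2 time = 17.
dims = [17, 32, 6]
weights = [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
           for m, n in zip(dims[:-1], dims[1:])]

grid = LatentGrid()
d = decode_deformation(grid, np.array([0.5, 0.5, 0.5]),
                       np.array([0.0, 0.0, 1.0]), 0.25, weights)
print(d.shape)  # (6,)

# Precondition a (mock) gradient of the coarsest grid level.
g = sobolev_precondition(rng.normal(size=grid.levels[0].shape))
print(g.shape)  # (8, 8, 8, 4)
```

In an actual optimizer, the preconditioned gradient `g` would replace the raw gradient in each update of the grid features, which is what makes the smoothed, drift-free convergence plausible without explicit correspondences.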