Repurposing Geometric Foundation Models for Multi-view Diffusion

2026-03-23

Computer Vision and Pattern Recognition
AI summary

The authors studied how to create images of objects from new angles (novel view synthesis) and found that the usual way of creating these images doesn't keep the 3D shape consistent. They introduced a new method called Geometric Latent Diffusion (GLD) that uses a special feature space already good at understanding 3D shapes, which helps generate images that look better and match across different views. Their method is faster to train and gives higher quality images than previous methods that rely on a common type of model called a VAE. Even without extra training on huge image-text datasets, GLD competes well with top methods.

novel view synthesis, latent space, geometric consistency, diffusion model, geometric foundation models, variational autoencoder (VAE), RGB reconstruction, multi-view generation, training acceleration, cross-view correspondence
Authors
Wooseok Jang, Seonghu Jeon, Jisang Han, Jinhyeok Choi, Minkyung Kwon, Seungryong Kim, Saining Xie, Sainan Liu
Abstract
While recent advances in generative latent spaces have driven substantial progress in single-image generation, the optimal latent space for novel view synthesis (NVS) remains largely unexplored. In particular, NVS requires geometrically consistent generation across viewpoints, but existing approaches typically operate in a view-independent VAE latent space. In this paper, we propose Geometric Latent Diffusion (GLD), a framework that repurposes the geometrically consistent feature space of geometric foundation models as the latent space for multi-view diffusion. We show that these features not only support high-fidelity RGB reconstruction but also encode strong cross-view geometric correspondences, providing a well-suited latent space for NVS. Our experiments demonstrate that GLD outperforms both VAE and RAE on 2D image quality and 3D consistency metrics, while accelerating training by more than 4.4x compared to the VAE latent space. Notably, GLD remains competitive with state-of-the-art methods that leverage large-scale text-to-image pretraining, despite training its diffusion model from scratch without such generative pretraining.