Utonia: Toward One Encoder for All Point Clouds

2026-03-03

Computer Vision and Pattern Recognition
AI summary

The authors introduce Utonia, a single model that learns to understand 3D point clouds from many different sources, such as drones, outdoor LiDAR, indoor scanners, and ordinary video cameras. Even though these sources produce very different kinds of data, Utonia learns a consistent representation that works well across all of them. The authors find that training on this mixed data gives the model new abilities, improving tasks like robot manipulation and spatial reasoning in vision-language systems. The work is a first step toward general-purpose models for 3D data, with applications in areas like robotics and augmented reality.

point clouds, self-supervised learning, transformer encoder, 3D representation, LiDAR, RGB-D, vision-language models, robotic manipulation, multimodal reasoning, foundation models
Authors
Yujia Zhang, Xiaoyang Wu, Yunhan Yang, Xianzhe Fan, Han Li, Yuechen Zhang, Zehao Huang, Naiyan Wang, Hengshuang Zhao
Abstract
We dream of a future where point clouds from all domains can come together to shape a single model that benefits them all. Toward this goal, we present Utonia, a first step toward training a single self-supervised point transformer encoder across diverse domains, spanning remote sensing, outdoor LiDAR, indoor RGB-D sequences, object-centric CAD models, and point clouds lifted from RGB-only videos. Despite their distinct sensing geometries, densities, and priors, Utonia learns a consistent representation space that transfers across domains. This unification improves perception capability while revealing intriguing emergent behaviors that arise only when domains are trained jointly. Beyond perception, we observe that Utonia representations can also benefit embodied and multimodal reasoning: conditioning vision-language-action policies on Utonia features improves robotic manipulation, and integrating them into vision-language models yields gains on spatial reasoning. We hope Utonia can serve as a step toward foundation models for sparse 3D data, and support downstream applications in AR/VR, robotics, and autonomous driving.