LoST: Level of Semantics Tokenization for 3D Shapes
2026-03-18 • Computer Vision and Pattern Recognition • Graphics • Machine Learning
AI summary
The authors study how 3D shapes are decomposed into tokens for generative models, especially autoregressive (AR) ones. They observe that current methods rely on geometric level-of-detail hierarchies, which are token-inefficient and lack semantic coherence. Their method, Level-of-Semantics Tokenization (LoST), orders tokens by semantic salience, so that early tokens capture the principal semantics of a shape and later tokens add instance-specific refinements. They also introduce a training loss, Relational Inter-Distance Alignment (RIDA), that aligns the relational structure of the 3D shape latent space with that of semantic DINO features. Experiments show that LoST improves reconstruction quality and token efficiency over previous methods.
Tokenization • Autoregressive models • 3D shape generation • Level-of-detail (LoD) • Semantic coherence • Latent space • DINO features • Reconstruction metrics • Relational alignment loss • Semantic retrieval
Authors
Niladri Shekhar Dutt, Zifan Shi, Paul Guerrero, Chun-Hao Paul Huang, Duygu Ceylan, Niloy J. Mitra, Xuelin Chen
Abstract
Tokenization is a fundamental technique in the generative modeling of various modalities. In particular, it plays a critical role in autoregressive (AR) models, which have recently emerged as a compelling option for 3D generation. However, optimal tokenization of 3D shapes remains an open question. State-of-the-art (SOTA) methods primarily rely on geometric level-of-detail (LoD) hierarchies, originally designed for rendering and compression. These spatial hierarchies are often token-inefficient and lack semantic coherence for AR modeling. We propose Level-of-Semantics Tokenization (LoST), which orders tokens by semantic salience, such that early prefixes decode into complete, plausible shapes that possess principal semantics, while subsequent tokens refine instance-specific geometric and semantic details. To train LoST, we introduce Relational Inter-Distance Alignment (RIDA), a novel 3D semantic alignment loss that aligns the relational structure of the 3D shape latent space with that of the semantic DINO feature space. Experiments show that LoST achieves SOTA reconstruction, surpassing previous LoD-based 3D shape tokenizers by large margins on both geometric and semantic reconstruction metrics. Moreover, LoST achieves efficient, high-quality AR 3D generation and enables downstream tasks like semantic retrieval, while using only 0.1%-10% of the tokens needed by prior AR models.
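The abstract describes RIDA as aligning the *relational* structure (pairwise distances) of the shape latent space with that of the DINO feature space, rather than matching features directly. A minimal sketch of that idea, assuming a simple normalized distance-matrix discrepancy (the paper's exact formulation, normalization, and distance choice are not given here, so these are illustrative assumptions; `rida_loss` is a hypothetical name):

```python
import numpy as np

def pairwise_distances(x):
    """Euclidean distance matrix over the rows of x, shape (N, N)."""
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def rida_loss(shape_latents, dino_features, eps=1e-8):
    """Illustrative relational alignment loss: penalize mismatch between
    the normalized pairwise-distance structures of two embedding spaces.
    The two spaces may have different dimensionality; only their
    relational (inter-sample distance) structure is compared."""
    d_shape = pairwise_distances(shape_latents)
    d_dino = pairwise_distances(dino_features)
    # Normalize each distance matrix by its mean so the comparison is
    # invariant to the overall scale of either embedding space.
    d_shape = d_shape / (d_shape.mean() + eps)
    d_dino = d_dino / (d_dino.mean() + eps)
    return float(((d_shape - d_dino) ** 2).mean())
```

Because both distance matrices are mean-normalized, the loss is zero for any rescaling of a perfectly aligned latent space, and it compares spaces of different dimensionality without requiring a learned projection between them.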