3D-Layout-R1: Structured Reasoning for Language-Instructed Spatial Editing
2026-03-23 • Computer Vision and Pattern Recognition • Artificial Intelligence
AI summary
The authors find that large language and vision-language models, despite strong general reasoning, struggle with precise layout and spatial editing. Their method represents a scene as a scene graph (a structured map of objects and their relationships) and reasons over that graph to apply text instructions while keeping spatial relationships consistent. This makes the editing process more interpretable and controllable, and experiments show it places objects more accurately than previous techniques.
Large Language Models • Vision Language Models • scene graph • spatial layout editing • text-conditioned editing • spatial coherence • Chain of Thought Fine-tuning • Intersection over Union (IoU) • mean IoU (mIoU) • spatial reasoning
Authors
Haoyu Zhen, Xiaolong Li, Yilin Zhao, Han Zhang, Sifei Liu, Kaichun Mo, Chuang Gan, Subhashree Radhakrishnan
Abstract
Large Language Models (LLMs) and Vision Language Models (VLMs) have shown impressive reasoning abilities, yet they struggle with spatial understanding and layout consistency when performing fine-grained visual editing. We introduce a Structured Reasoning framework that performs text-conditioned spatial layout editing via scene-graph reasoning. Given an input scene graph and a natural-language instruction, the model reasons over the graph to generate an updated scene graph that satisfies the text condition while maintaining spatial coherence. By explicitly guiding the reasoning process through structured relational representations, our approach improves both interpretability and control over spatial relationships. We evaluate our method on a new text-guided layout editing benchmark encompassing sorting, spatial alignment, and room-editing tasks. Our training paradigm yields an average 15% improvement in IoU and 25% reduction in center-distance error compared to Chain of Thought Fine-tuning (CoT-SFT) and vanilla GRPO baselines. Compared to SOTA zero-shot LLMs, our best models achieve up to 20% higher mIoU, demonstrating markedly improved spatial precision.
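The two evaluation metrics named in the abstract, IoU and center-distance error, are standard for comparing a predicted object layout against a ground-truth one. The paper does not give its exact implementation, but for axis-aligned 2D boxes in `(x_min, y_min, x_max, y_max)` form they are conventionally computed as in this minimal sketch (the box format and function names are assumptions for illustration):

```python
# Sketch of the two layout metrics from the abstract, assuming axis-aligned
# 2D boxes given as (x_min, y_min, x_max, y_max). Not the authors' code.
import math

def iou(a, b):
    """Intersection over Union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def center_distance(a, b):
    """Euclidean distance between box centers (the 'center-distance error')."""
    ca = ((a[0] + a[2]) / 2, (a[1] + a[3]) / 2)
    cb = ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)
    return math.hypot(ca[0] - cb[0], ca[1] - cb[1])

print(iou((0, 0, 2, 2), (0, 0, 2, 2)))              # → 1.0 (identical boxes)
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))              # → 0.333... (half overlap)
print(center_distance((0, 0, 2, 2), (1, 0, 3, 2)))  # → 1.0
```

The reported mIoU would then be the mean of per-object IoU over all edited objects in the benchmark, so higher mIoU and lower center distance both indicate better spatial precision.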