SceneAssistant: A Visual Feedback Agent for Open-Vocabulary 3D Scene Generation
2026-03-12 • Computer Vision and Pattern Recognition
AI summary
The authors developed SceneAssistant, a system that creates 3D scenes from everyday language without restrictions on object types or their arrangements. It combines models that generate 3D objects with vision-language models that plan and adjust the scene step by step by interpreting visual feedback. This approach lets the system better match the text description and place objects more naturally. Their experiments show it outperforms previous methods and can also edit existing scenes from language instructions.
Text-to-3D generation, Vision-Language Models, Spatial reasoning, 3D object generation, Natural language processing, Scene composition, Visual feedback, Open-vocabulary, Digital content creation, Human evaluation
Authors
Jun Luo, Jiaxiang Tang, Ruijie Lu, Gang Zeng
Abstract
Text-to-3D scene generation from natural language is highly desirable for digital content creation. However, existing methods are largely domain-restricted or reliant on predefined spatial relationships, limiting their capacity for unconstrained, open-vocabulary 3D scene synthesis. In this paper, we introduce SceneAssistant, a visual-feedback-driven agent designed for open-vocabulary 3D scene generation. Our framework leverages modern 3D object generation models along with the spatial reasoning and planning capabilities of Vision-Language Models (VLMs). To enable open-vocabulary scene composition, we provide the VLMs with a comprehensive set of atomic operations (e.g., Scale, Rotate, FocusOn). At each interaction step, the VLM receives rendered visual feedback and takes actions accordingly, iteratively refining the scene to achieve more coherent spatial arrangements and better alignment with the input text. Experimental results demonstrate that our method can generate diverse, open-vocabulary, and high-quality 3D scenes. Both qualitative analysis and quantitative human evaluations demonstrate the superiority of our approach over existing methods. Furthermore, our method allows users to instruct the agent to edit existing scenes via natural language commands. Our code is available at https://github.com/ROUJINN/SceneAssistant.
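The iterative loop the abstract describes (VLM observes rendered feedback, emits atomic operations, scene is updated and re-rendered) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `Scene`, `vlm_plan`, and `generate_scene` are all hypothetical names, and the VLM call is stubbed out.

```python
# Hypothetical sketch of a visual-feedback agent loop in the spirit of
# SceneAssistant. All names and behaviors here are illustrative assumptions,
# not the authors' actual API.
from dataclasses import dataclass, field

@dataclass
class Scene:
    # Maps object name -> list of applied atomic operations.
    objects: dict = field(default_factory=dict)

    def apply(self, op, target, value):
        # Record an atomic operation such as Scale, Rotate, or FocusOn.
        self.objects.setdefault(target, []).append((op, value))

    def render(self):
        # Stand-in for rendering the scene into visual feedback for the VLM.
        return f"render of {len(self.objects)} object(s)"

def vlm_plan(prompt, feedback):
    # Stand-in for a VLM call: given the text prompt and the latest rendered
    # feedback, return a list of atomic operations, or [] when satisfied.
    if feedback is None:  # first step: no feedback yet
        return [("Scale", "chair", 1.2), ("Rotate", "chair", 90)]
    return []  # pretend the VLM now judges the scene complete

def generate_scene(prompt, max_steps=5):
    scene, feedback = Scene(), None
    for _ in range(max_steps):
        actions = vlm_plan(prompt, feedback)
        if not actions:  # VLM signals the scene matches the prompt
            break
        for op, target, value in actions:
            scene.apply(op, target, value)
        feedback = scene.render()  # fresh visual feedback for the next step
    return scene

scene = generate_scene("a cozy reading corner")
print(scene.objects)  # {'chair': [('Scale', 1.2), ('Rotate', 90)]}
```

In the real system the stubbed `vlm_plan` would be a call to a vision-language model conditioned on rendered images, and `render` would invoke an actual 3D renderer; the control flow, however, follows the observe-act-refine cycle the abstract outlines.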