Keywords: Large Language Models (LLMs), Vision-Language Models (VLMs), floor-plan layouts, SceneCritic, SceneOnto, spatial ontology, semantic coherence, object orientation, model refinement, 3D indoor scene datasets
Abstract
Large Language Models (LLMs) and Vision-Language Models (VLMs) increasingly generate indoor scenes through intermediate structures such as layouts and scene graphs, yet evaluation still relies on LLM or VLM judges that score rendered views, making judgments sensitive to viewpoint, prompt phrasing, and hallucination. When the evaluator is unstable, it becomes difficult to determine whether a model has produced a spatially plausible scene or whether the score merely reflects the choice of viewpoint, rendering, or prompt. We introduce SceneCritic, a symbolic evaluator for floor-plan-level layouts. SceneCritic's constraints are grounded in SceneOnto, a structured spatial ontology we construct by aggregating indoor scene priors from 3D-FRONT, ScanNet, and Visual Genome. SceneCritic traverses this ontology to jointly verify semantic, orientation, and geometric coherence across object relationships, providing object-level and relationship-level assessments that identify specific violations and successful placements. Furthermore, we pair SceneCritic with an iterative refinement testbed that probes how models build and revise spatial structure under different critic modalities: a rule-based critic using collision constraints as feedback, an LLM critic operating on the layout as text, and a VLM critic operating on rendered observations. Through extensive experiments, we show that (a) SceneCritic aligns substantially better with human judgments than VLM-based evaluators, (b) text-only LLMs can outperform VLMs on semantic layout quality, and (c) image-based VLM refinement is the most effective critic modality for semantic and orientation correction.
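To make the idea of a symbolic, ontology-grounded layout critic concrete, here is a minimal sketch. All names (`Obj`, `ONTOLOGY`, `critique`, the `beside` relation, and the gap threshold) are hypothetical illustrations, not the paper's actual SceneCritic or SceneOnto implementation: a toy ontology maps category pairs to expected spatial relations, and the critic traverses all object pairs, flagging collisions and checking whether expected relations hold.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    category: str
    x: float   # footprint center (metres)
    y: float
    w: float   # footprint width
    d: float   # footprint depth

# Hypothetical mini-ontology: expected relations per category pair.
# SceneOnto aggregates such priors from real 3D indoor scene datasets.
ONTOLOGY = {
    ("nightstand", "bed"): {"beside"},
    ("chair", "desk"): {"beside"},
}

def overlaps(a: Obj, b: Obj) -> bool:
    """Axis-aligned footprint overlap, used as a simple collision proxy."""
    return (abs(a.x - b.x) < (a.w + b.w) / 2 and
            abs(a.y - b.y) < (a.d + b.d) / 2)

def beside(a: Obj, b: Obj, max_gap: float = 0.5) -> bool:
    """Footprints are adjacent: no overlap, and the larger axis gap is small."""
    gap_x = abs(a.x - b.x) - (a.w + b.w) / 2
    gap_y = abs(a.y - b.y) - (a.d + b.d) / 2
    return not overlaps(a, b) and max(gap_x, gap_y) < max_gap

def critique(layout):
    """Return relationship-level verdicts: violations and satisfied placements."""
    report = []
    for i, a in enumerate(layout):
        for b in layout[i + 1:]:
            if overlaps(a, b):
                report.append((a.name, b.name, "violation: collision"))
                continue
            expected = ONTOLOGY.get((a.category, b.category)) or \
                       ONTOLOGY.get((b.category, a.category))
            if expected and "beside" in expected:
                ok = beside(a, b)
                report.append((a.name, b.name,
                               "ok: beside" if ok else "violation: not beside"))
    return report

layout = [
    Obj("bed_0", "bed", 1.0, 1.0, 2.0, 1.6),
    Obj("nightstand_0", "nightstand", 2.3, 1.0, 0.5, 0.5),
]
for rec in critique(layout):
    print(rec)  # ('bed_0', 'nightstand_0', 'ok: beside')
```

Because every verdict is tied to a specific object pair and relation, such a critic yields deterministic, viewpoint-independent feedback, which is what makes the per-violation reports usable as refinement signals for the rule-based critic modality described above.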